Review

Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey

by Vasudevan Lakshminarayanan 1,*, Hoda Kheradfallah 1, Arya Sarkar 2 and Janarthanam Jothi Balaji 3

1 Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
2 Department of Computer Engineering, University of Engineering and Management, Kolkata 700 156, India
3 Department of Optometry, Medical Research Foundation, Chennai 600 006, India
* Author to whom correspondence should be addressed.
J. Imaging 2021, 7(9), 165; https://doi.org/10.3390/jimaging7090165
Submission received: 30 June 2021 / Revised: 23 August 2021 / Accepted: 24 August 2021 / Published: 27 August 2021
(This article belongs to the Special Issue Frontiers in Retinal Image Processing)

Abstract:
Diabetic Retinopathy (DR) is a leading cause of vision loss worldwide. In the past few years, artificial intelligence (AI) based approaches have been used to detect and grade DR. Early detection enables appropriate treatment and thus prevents vision loss. For this purpose, both fundus and optical coherence tomography (OCT) images are used to image the retina. Deep learning (DL) and machine learning (ML) approaches then make it possible to extract features from the images, detect the presence of DR, grade its severity, and segment associated lesions. This review covers the literature on AI approaches to DR, such as ML and DL for classification and segmentation, published in the open literature within the last six years (2016–2021). In addition, a comprehensive list of available DR datasets is reported. This list was constructed using both the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) and Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 search strategies. We summarize a total of 114 published articles which conformed to the scope of the review. In addition, a list of 43 major datasets is presented.

1. Introduction

Diabetic retinopathy (DR) is a major cause of irreversible visual impairment and blindness worldwide [1]. The etiology of DR is chronic high blood glucose, which causes retinal capillary damage; the disease mainly affects the working-age population. DR begins at a mild level with no apparent visual symptoms, but it can progress to severe and proliferative stages, and this progression can lead to blindness. Thus, early diagnosis and regular screening can decrease the risk of visual loss by 57.0% as well as reduce the cost of treatment [2].
DR is clinically diagnosed through observation of the retinal fundus, either directly or through imaging techniques such as fundus photography or optical coherence tomography. There are several standard DR grading systems, such as the Early Treatment Diabetic Retinopathy Study (ETDRS) [3] scale, which separates fine-grained DR characteristics into multiple levels; this grading is performed on all seven retinal fundus Fields of View (FOV). Although ETDRS [4] is the gold standard, due to its implementation complexity and technical limitations [5], alternative grading systems are also used, such as the International Clinical Diabetic Retinopathy (ICDR) [6] scale, which is accepted in both clinical and Computer-Aided Diagnosis (CAD) settings [7]. The ICDR scale defines 5 severity levels for DR and 4 levels for Diabetic Macular Edema (DME) and requires fewer FOVs [6]. The ICDR levels are discussed below and illustrated in Figure 1.
  • No Apparent Retinopathy: No abnormalities.
  • Mild Non-Proliferative Diabetic Retinopathy (NPDR): This is the first stage of diabetic retinopathy, specifically characterized by tiny areas of swelling in retinal blood vessels known as Microaneurysms (MA) [8]. There is an absence of profuse bleeding in retinal nerves and if DR is detected at this stage, it can help save the patient’s eyesight with proper medical treatment (Figure 1A).
  • Moderate NPDR: When left unchecked, mild NPDR progresses to a moderate stage in which blood leaks from the blocked retinal vessels. Additionally, at this stage, Hard Exudates (Ex) may be present (Figure 1B). Furthermore, the dilation and constriction of venules in the retina cause Venous Beadings (VB), which are visible ophthalmoscopically [8].
  • Severe NPDR: A larger number of retinal blood vessels are blocked at this stage, causing over 20 Intra-retinal Hemorrhages (IHE; Figure 1C) across all 4 fundus quadrants, or there are Intra-Retinal Microvascular Abnormalities (IRMA), which can be seen as bulges of thin vessels; IRMA appear as small, sharp-bordered red spots in at least one quadrant. Furthermore, there can be definite evidence of VB in over 2 quadrants [8].
  • Proliferative Diabetic Retinopathy (PDR): This is an advanced stage of the disease that occurs when the condition is left unchecked for an extended period of time. New blood vessels form in the retina and the condition is termed Neovascularization (NV). These blood vessels are often fragile, with a consequent risk of fluid leakage and proliferation of fibrous tissue [8]. Different functional visual problems occur at PDR, such as blurriness, reduced field of vision, and even complete blindness in some cases (Figure 1D).
DR detection has two main steps: screening and diagnosis. For this purpose, fine pathognomonic DR signs are normally identified in the initial stages after dilating the pupils (mydriasis). DR screening is then performed through slit-lamp biomicroscopy with a +90.0 D lens, and direct [9] or indirect ophthalmoscopy [10]. The next step is to diagnose DR, which is done by finding DR-associated lesions and comparing them against the criteria of a standard grading system. Currently, the diagnosis step is done manually. This procedure is costly and time-consuming and requires highly trained clinicians with considerable experience and diagnostic precision. Even when all these resources are available, there is still the possibility of misdiagnosis [11]. This dependency on manual evaluation makes the situation challenging. In 2020, the number of adults worldwide with DR and with vision-threatening DR was estimated at 103.12 million and 28.54 million, respectively; by 2045, these numbers are projected to increase to 160.50 million and 44.82 million [12]. The problem is compounded in developing countries, where there is a shortage of ophthalmologists [13,14] and limited access to standard clinical facilities, and it also exists in underserved areas of the developed world.
Recent developments in CAD techniques, a subfield of artificial intelligence (AI), are becoming more prominent in modern ophthalmology [15], as they can save time, cost, and human resources for routine DR screening and involve lower diagnostic error rates [15]. CAD can also efficiently manage the increasing number of DR patients [16] and diagnose DR in early stages, when fewer sight-threatening effects are present. AI-based approaches are divided between machine learning (ML) and deep learning (DL) solutions. These techniques vary depending on the imaging system and disease severity. For instance, in early levels of DR, super-resolution ultrasound imaging of microvessels [17] is used to visualize the deep ocular vasculature; on this imaging system, a successful CAD method applied a DL model to segment lesions on ultrasound images [18]. The widely applied imaging methods, such as Optical Coherence Tomography (OCT), OCT Angiography (OCTA), ultrawide-field fundus (UWF), and standard 45° fundus photography, are covered in this review. In addition to these imaging methods, Majumder et al. [15] reported a real-time DR screening procedure using a smartphone camera.
The main purpose of this review is to analyze 114 articles published within the last 6 years that focus on the detection of DR using CAD techniques. These techniques have made considerable progress in performance with the use of ML and DL schemes that employ the latest developments in Deep Convolutional Neural Networks (DCNNs) architectures for DR severity grading, progression analysis, abnormality detection and semantic segmentation. An overview of ophthalmic applications of convolutional neural networks is presented in [19,20,21,22].

2. Methods

2.1. Literature Search Details

For this review, literature from 5 publicly accessible databases was surveyed. The databases were chosen based on their depth, ease of accessibility, and popularity. These 5 databases are:
Google Scholar was chosen to fill gaps in the search strategy by identifying literature from multiple sources, along with articles that might be missed in manual selection from the other four databases. Articles on this topic within the latest six-year period show that advances in AI-enabled DR detection have increased considerably. Figure 2, generated using the PubMed results, visualizes the articles matching this topic.
At the time of writing this review, a total of 10,635 search results were listed in the PubMed database for this time period when just the term “diabetic retinopathy” was used. The MEDLINE database is arguably the largest for biomedical research. In addition, some resources from the National Library of Medicine which is a part of the U.S. National Institutes of Health, were employed in this review.
A search of the IEEE Xplore library and the SPIE digital library for the given time period returns a total of 812 and 332 results, respectively. The IEEE Xplore and SPIE libraries contain only publications of these two professional societies. Further sources were added to this list by collecting papers from non-traditional venues such as the preprint server ArXiv. In Figure 3, using data from all sources, we plot the number of papers published as a function of year.
The scope of this review is limited to “automated detection and grading of diabetic retinopathy using fundus & OCT images”. Therefore, to make the search more manageable, a combination of relevant keywords was applied using the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) search strategy [23]. Keywords used in the PICO search were predetermined. A combination of (“DR” and (“DL” or “ML” or “AI”)) and (fundus or OCT) was used which reduced the initial 10,635 search results in PubMed to just 217 during the period under consideration. A manual process of eliminating duplicate search results carried out across the results from all obtained databases resulted in a total number of 114 papers.
Overall, the search strategy for identifying relevant research for the review involved three main steps:
  • Using the predefined set of keywords and logical operators, a small set of papers were identified in this time range (2016–2021).
  • Using a manual search strategy, the papers falling outside the scope of this review were eliminated.
  • The duplicate articles (i.e., the papers occurring in multiple databases) were eliminated to obtain the set of unique articles.
The search strategy followed by this review abides by the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 checklist [24], and the detailed search and identification pipeline is shown in Figure 4.
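The three-step strategy above can be sketched in code. This is an illustrative sketch only: the record fields, the sample titles, and the keyword heuristic below are hypothetical stand-ins for the actual PICO query and database exports, not the review's implementation.

```python
# Sketch of the three-step search pipeline: keyword filtering, scope
# filtering (approximated here by the same keyword test), deduplication.

def keyword_match(title):
    """Step 1: approximate the boolean query
    ("DR" AND ("DL" OR "ML" OR "AI")) AND ("fundus" OR "OCT")."""
    t = title.lower()
    has_dr = "diabetic retinopathy" in t
    has_ai = any(k in t for k in ("deep learning", "machine learning",
                                  "artificial intelligence"))
    has_img = ("fundus" in t) or ("oct" in t)
    return has_dr and has_ai and has_img

def deduplicate(records):
    """Step 3: drop papers occurring in multiple databases (keyed by DOI)."""
    seen, unique = set(), []
    for r in records:
        if r["doi"] not in seen:
            seen.add(r["doi"])
            unique.append(r)
    return unique

# hypothetical records from two overlapping databases
records = [
    {"doi": "10.1/a", "title": "Deep learning for diabetic retinopathy on fundus images", "source": "PubMed"},
    {"doi": "10.1/a", "title": "Deep learning for diabetic retinopathy on fundus images", "source": "IEEE Xplore"},
    {"doi": "10.1/b", "title": "Machine learning grading of diabetic retinopathy in OCT", "source": "SPIE"},
]
in_scope = [r for r in records if keyword_match(r["title"])]
unique = deduplicate(in_scope)
```

Keying the deduplication on DOI rather than title avoids false merges between distinct papers with similar names.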

2.2. Dataset Search Details

The backbone of any automated detection model, whether ML-based, DL-based, or multi-model-based, is the dataset. High-quality data with correct annotations are extremely important for image feature extraction and for properly training a DR detection model. In this review, a comprehensive list of datasets has been created and discussed. A previously published paper [25] also gives a list of ophthalmic image datasets, containing 33 datasets that can be used for training DR detection and grading models. The paper by Khan et al. [25] highlighted 33 of the 43 datasets presented in Table 1. However, some popular and publicly accessible databases are not listed by Khan et al. [25], e.g., UoA-DR [26], the longitudinal DR screening data [27], and FGADR [28]. In this review, we identified additional datasets that are available for use. The search strategy for determining relevant DR detection datasets is as follows:
1. Appropriate results from all 5 of the selected databases (PubMed, PUBLONS, etc.) were searched manually, gathering the names of datasets for DR detection and grading.
2. The original papers and websites associated with each dataset were analyzed, and a systematic, tabular representation of all available information was created.
3. The Google dataset search and different forums were checked for missing dataset entries, and step 2 was repeated for all original datasets found.
4. A final, comprehensive list of datasets and their details was generated and is presented in Table 1.
A total of 43 datasets were identified using the search strategy given above. Upon further inspection, 30 datasets were identified as open access (OA), i.e., they can be accessed easily without any permission or payment. Of the total, 6 have restricted access; these can be accessed with the permission of the author or institution. The remaining 7 are private and cannot be accessed. Because of the diversity of their images (multi-national and multi-ethnic groups), these datasets can be used to create generalized models.

3. Results

3.1. Dataset Search Results

This section provides a high-level overview of the search results that were obtained for datasets, drawing also on review articles on datasets in the domain of ophthalmology, e.g., Khan et al. [25], and on leads obtained from GitHub and other online forums. Thus, 43 datasets were identified, and a general overview of them is systematically presented in this section. The datasets reviewed in this article are not limited to the 2016–2021 window and may have been released earlier. The list of datasets and their characteristics is shown in Table 1 below. Depending on the restrictions and other proforma required for accessing the datasets, the list has been divided into 3 classes; they are:
  • Public open access (OA) datasets with high quality DR grades.
  • DR datasets, that can be accessed upon request, i.e., can be accessed by filling necessary agreements and forms for fair usage; they are a sub-type of (OA) databases and are termed Access Upon Request (AUR) in the table.
  • Private datasets from different institutions that are not publicly accessible or require explicit permission to access; these are termed Not Open Access (NOA).

3.2. Diabetic Retinopathy Classification

This section discusses the classification approaches used for DR detection. The classification can be for the detection of DR [68], referable DR (RDR) [66,69], vision threatening DR (vtDR) [66], or to analyze the proliferation level of DR using the ICDR system. Some studies also considered Diabetic Macular Edema (DME) [69,70]. Recent ML and DL methods have produced promising results in automated DR diagnosis.
Thus, multiple performance metrics, such as accuracy (ACC), sensitivity (SE, also called recall), specificity (SP), precision, area under the curve (AUC), and the F1 and Kappa scores, are used to evaluate classification performance. Table 2 and Table 3 present a brief overview of articles that used fundus images for DR classification, and of articles that classify DR on fundus images using novel preprocessing techniques, respectively. Table 4 lists recent DR classification studies that used OCT and OCTA images. In the following subsections, we provide details of the ML and DL aspects and evaluate the performance of prior studies in terms of quantitative metrics.
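As a concrete reference for how these metrics relate to a binary confusion matrix, the following minimal sketch computes ACC, SE (recall), SP, and F1; the toy labels are hypothetical.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Compute ACC, SE (sensitivity/recall), SP (specificity), and F1
    from binary labels, as used to evaluate DR classifiers."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives
    acc = (tp + tn) / len(y_true)
    se = tp / (tp + fn)                          # sensitivity (recall)
    sp = tn / (tn + fp)                          # specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * se / (precision + se)
    return acc, se, sp, f1

# toy example: 1 = referable DR, 0 = no referable DR
acc, se, sp, f1 = classification_metrics([1, 1, 0, 0, 1, 0],
                                         [1, 0, 0, 0, 1, 1])
```

Note that specificity is computed from the negatives (TN, FP), so it is a different quantity from precision (TP over predicted positives).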

3.2.1. Machine Learning Approaches

In this review, 9 out of 93 classification-based studies employed machine learning approaches and 1 article used an un-ML method for detecting and grading DR. Hence, in this section, we evaluate the various ML-based feature extraction and decision-making techniques that have been employed in the selected primary studies to construct DR detection models. In general, six major distinct ML algorithms were used in these studies: principal component analysis (PCA) [70,71], linear discriminant analysis (LDA)-based feature selection [71], the scale-invariant feature transform (SIFT) [71], support vector machines (SVM) [16,71,72,73], k-nearest neighbors (KNN) [72], and random forests (RF) [74].
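For illustration, the PCA step used in several of these pipelines can be sketched with numpy alone (in practice a library routine such as scikit-learn's PCA would typically be used before the downstream classifier); the feature matrix here is synthetic.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto their top principal components -
    the dimensionality-reduction step applied before an ML classifier."""
    Xc = X - X.mean(axis=0)                    # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T            # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                 # 100 images x 20 hypothetical features
Z = pca_reduce(X, 3)                           # keep the 3 strongest directions
```

Because the singular values are sorted in decreasing order, the retained components capture the largest-variance directions of the feature space.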
In addition to the widely used ML methods, some studies such as [75] presented a pure ML model with an accuracy of over 80%, using t-distributed Stochastic Neighbor Embedding (t-SNE) for image dimensionality reduction in combination with an ML Bagging Ensemble Classifier (ML-BEC). ML-BEC improves classification performance by using the feature-bagging technique with low computational time. Ali et al. [57] focused on five fundamental ML models, namely sequential minimal optimization (SMO), logistic (Lg), multi-layer perceptron (MLP), logistic model tree (LMT), and simple logistic (SLg), at the classification level. This study proposed a novel preprocessing method in which the Region of Interest (ROI) of lesions is segmented with a clustering-based method, K-Means; Ali et al. [57] then extracted histogram, wavelet, grey-level co-occurrence, and run-length matrix (GLCM and GLRLM) features from the segmented ROIs. This method outperformed previous models with an average accuracy of 98.83% across the five ML models. However, although an ML model such as SLg performs well, the required classification time is 0.38 on an Intel Core i3 1.9 gigahertz (GHz) CPU with a 64-bit Windows 10 operating system and 8 gigabytes (GB) of memory, which is higher than in previous studies.
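The GLCM texture features used by Ali et al. [57] can be illustrated with a minimal numpy implementation. This computes a co-occurrence matrix for a single horizontal offset and two classic texture features; it is a simplification of the full GLCM/GLRLM feature set, for which a routine such as skimage's `graycomatrix` would normally be used.

```python
import numpy as np

def glcm(img, levels=4):
    """Grey-level co-occurrence matrix for the horizontal neighbour
    offset (0, 1), normalized to a joint probability table."""
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def glcm_features(p):
    """Two classic texture features computed from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)            # penalizes unequal neighbours
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))  # rewards smooth regions
    return contrast, homogeneity

# tiny synthetic ROI with 4 grey levels
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img)
contrast, homogeneity = glcm_features(p)
```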
ML methods can also be applied to OCT and OCTA for DR detection. Recently, Liu et al. [76] deployed four ML models (logistic regression (LR), logistic regression regularized with the elastic net penalty (LR-EN), support vector machine (SVM), and the gradient boosting tree XGBoost) on over 246 OCTA wavelet features and obtained ACC, SE, and SP of 82%, 71%, and 77%, respectively. Despite these modest results, this approach has the potential to reach higher scores through model optimization and hyper-parameter fine-tuning. These studies show lower overall performance when only a small number of feature types and simple ML models are used. Dimensionality reduction is an application of ML models that can be added in the decision layer of CAD systems [77,78].
ML methods combined with DL networks can perform comparably to pure DL models. Narayanan et al. [78] applied an SVM to classify features obtained from state-of-the-art DCNNs and optimized with PCA [78]. This provided an accuracy of 85.7% on preprocessed images; in comparison with methods such as AlexNet, VGG, ResNet, and Inception-v3, the authors report an ACC of 99.5%. They also found the technique more practical, with considerably lower computational cost.

3.2.2. Deep Learning Approaches

This section gives an overview of the DL algorithms that have been used. Depending on the imaging system, image resolution, noise level, and contrast, as well as the size of the dataset, the methods can vary. Some studies propose customized networks, such as the work done by Gulshan et al. [69], Gargeya et al. [68], Rajalakshmi et al. [79], and Riaz et al. [80]. These networks have lower performance than state-of-the-art networks such as VGG, ResNet, Inception, and DenseNet, but their fewer layers make them more generalizable, suitable for training with small datasets, and computationally efficient. Quellec et al. [81] applied L2 regularization to the best-performing DCNN in the KAGGLE DR detection competition, named o-O. Another example of a customized network is the model proposed by Sayres et al. [82], which showed 88.4%, 91.5%, and 94.8% for ACC, SE, and SP, respectively, over a small subset of 2000 images obtained from the EyePACS database. However, the performance of this network is lower than the results of Mansour et al. [72], who used a larger subset of EyePACS (35,126 images). Mansour et al. [72] deployed more complex architectures such as AlexNet on features extracted with LDA and PCA, generating better results than Sayres et al. [82], with 97.93%, 100%, and 93% for ACC, SE, and SP, respectively. Such DCNNs should be used with large datasets, since a large number of training images reduces error. If a deep architecture is applied to a small number of observations, it may overfit, so that performance on the test data is not as good as on the training data. On the other hand, network depth does not always guarantee higher performance: deep networks can face problems such as vanishing or exploding gradients, which must be addressed by redesigning the network toward simpler architectures. Furthermore, deep networks extract several low- and high-level features.
As these image features become more complicated, they become more difficult to interpret. Sometimes, high-level attributes are not clinically meaningful. For instance, a high-level attribute may reflect a bias shared by all images of a certain class, such as light intensity or similar vessel patterns; these are not signs of DR, but the DCNN will treat them as critical features, making the output predictions erroneous.
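The effect of the L2 regularization mentioned above (e.g., as applied by Quellec et al. [81]) can be illustrated with a generic weight-decay gradient step; this is a didactic numpy sketch, not the training code of any cited study.

```python
import numpy as np

def l2_regularized_step(w, grad_loss, lr=0.1, lam=0.01):
    """One gradient step on loss + (lam/2) * ||w||^2.
    The L2 term adds lam * w to the gradient, shrinking weights toward
    zero and discouraging overfitting in over-parameterized networks."""
    return w - lr * (grad_loss + lam * w)

w = np.array([2.0, -3.0, 0.5])
zero_grad = np.zeros_like(w)
# with a zero data gradient, the penalty alone decays every weight
w_next = l2_regularized_step(w, zero_grad)
```

Each step multiplies the weights by (1 - lr * lam), so large weights are penalized proportionally more in absolute terms, which tends to produce smoother decision functions.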
In the scope of DL-based classification, Hua et al. [83] designed a DL model named Trilogy of Skip-connection Deep Networks (Tri-SDN) over the pretrained base model ResNet50. It applies skip-connection blocks to make tuning faster, yielding ACC and SP of 90.6% and 82.1%, respectively, considerably better than the 83.3% and 64.1% obtained when skip-connection blocks are not used.
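The skip-connection idea behind Tri-SDN and ResNet-style models can be illustrated with a minimal numpy residual block; the shapes and weights here are synthetic, and this is not the Tri-SDN architecture itself.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def skip_block(x, W1, W2):
    """Residual (skip-connection) block: the input bypasses the two
    transformed layers and is added back before the final activation,
    giving gradients an identity path to flow through during tuning."""
    return relu(W2 @ relu(W1 @ x) + x)

rng = np.random.default_rng(1)
x = rng.normal(size=4)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
y = skip_block(x, W1, W2)

# with zero weights, the block reduces to the identity path: relu(x)
y_id = skip_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
```

The identity path is the key property: even if the learned branch contributes nothing, the block still passes its input forward, which is what makes very deep stacks trainable.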
There are additional studies that do not propose new network architectures but instead enhance the preprocessing step. The study by Pao et al. [84] presents a bi-channel customized CNN in which an image enhancement technique known as unsharp masking is used. The enhanced images and entropy images are used as the inputs of a CNN with 4 convolutional layers, with results of 87.83%, 77.81%, and 93.88% for ACC, SE, and SP, respectively. These results are all higher than those obtained without preprocessing (81.80%, 68.36%, and 89.87%, respectively).
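Unsharp masking, the enhancement used by Pao et al. [84], amounts to adding back the difference between an image and a blurred copy of it. In the sketch below, the box blur is a simple stand-in for whatever smoothing kernel the original study used.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter (a stand-in for a Gaussian blur)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0):
    """Unsharp masking: add the high-frequency residual (image minus
    its blurred version) back, boosting edges such as vessel borders."""
    return img + amount * (img - box_blur(img))

img = np.zeros((8, 8))
img[:, 4:] = 1.0          # a vertical step edge
sharp = unsharp_mask(img)
```

Flat regions are left unchanged, while the transition overshoots on both sides of the edge, which is exactly the contrast boost the preprocessing step is after.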
Shankar et al. [85] proposed another approach to preprocessing using Histogram-based segmentation to extract regions containing lesions on fundus images. As the classification step, this article utilized the Synergic DL (SDL) model and the results indicated that the presented SDL model offers better results than popular DCNNs on MESSIDOR 1 database in terms of ACC, SE, SP.
Furthermore, classification is not limited to DR detection; DCNNs can also be applied to detect the presence of DR-related lesions, as in the study reported by Wang et al. They cover twelve lesions: MA, IHE, superficial retinal hemorrhages (SRH), Ex, CWS, venous abnormalities (VAN), IRMA, NV at the disc (NVD), NV elsewhere (NVE), pre-retinal FIP, VPHE, and tractional retinal detachment (TRD), with an average precision and AUC of 0.67 and 0.95, respectively; however, lesions such as VAN have low individual detection accuracy. This study provides essential steps toward DR detection based on the presence of lesions, which could be more interpretable than DCNNs that act as black boxes [86,87,88].
There are explainable backpropagation-based methods that produce heatmaps of the lesions associated with DR, such as the study by Keel et al. [89], which highlights Ex, HE, and vascular abnormalities in DR-diagnosed images. These methods have limited performance, providing generic explanations that may not be clinically reliable. Table 2, Table 3 and Table 4 briefly summarize previous studies on DR classification with DL methods.
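The combination step behind such gradient-based heatmaps can be sketched framework-free: given feature maps and importance weights (both synthetic here), a CAM/Grad-CAM-style map is a ReLU'd weighted sum, normalized for display. This is a generic illustration, not the exact method of Keel et al. [89].

```python
import numpy as np

def cam_heatmap(feature_maps, weights):
    """Combine DCNN feature maps A_k with importance weights alpha_k,
    keep only positive evidence (ReLU), and normalize to [0, 1] -
    the core combination step of CAM/Grad-CAM style explanations."""
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A_k
    cam = np.maximum(cam, 0.0)                         # positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam

# two synthetic 4x4 feature maps: one fires on a "lesion" region
A = np.zeros((2, 4, 4))
A[0, 1:3, 1:3] = 1.0       # map 0 activates on the lesion patch
A[1] = 0.2                 # map 1 is uniform background activity
heat = cam_heatmap(A, np.array([1.0, -0.5]))
```

In a real pipeline, the weights come from gradients of the class score with respect to each feature map, so the heatmap highlights regions that pushed the prediction toward the diagnosed class.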

3.3. Diabetic Retinopathy Lesion Segmentation

The state-of-the-art DR classification machines [68,69] identify referable DR without directly taking lesion information into account. Therefore, their predictions lack clinical interpretation, despite their high accuracy. This black-box nature of DCNNs is the major problem that makes them unsuitable for clinical application [86,152,153] and has made the topic of eXplainable AI (XAI) of major importance [153]. Recently, visualization techniques such as gradient-based XAI have been widely used for evaluating networks. However, these methods, with their generic heatmaps, only highlight the major contributing lesions and hence are not suitable for detecting DR with multiple lesions and severities. Thus, some studies focused on lesion-based DR detection instead. In general, we found 20 papers that segment lesions such as MA (10 articles), Ex (9 articles), and IHE, VHE, PHE, IRMA, NV, and CWS. In the following sections, we discuss the general segmentation approaches. The implementation details of each article are given in Table 5 and Table 6, according to imaging type.

3.3.1. Machine Learning and Un-Machine Learning Approaches

In general, ML methods, with their high processing speed, low computational cost, and interpretable decisions, are preferred to DCNNs. However, automatic detection of subtle lesions such as MA has not reached acceptable values. In this review, we collected 2 pure ML-involved models and 6 un-ML methods. Ali Shah et al. [154] detected MA using color, Hessian, and curvelet-based feature extraction and achieved a SE of 48.2%. Huang et al. [155] focused on localizing NV using the Extreme Learning Machine (ELM). This study applied standard deviation, Gabor, differential invariant, and anisotropic filters, with ELM as the final classifier. The network performed as well as an SVM with lower computational time (6111 s vs. 6877 s) on a PC running Microsoft Windows 7 with a Pentium Dual-Core E5500 CPU and 4 GB memory. For the segmentation task, the preprocessing step played a fundamental role and had a direct effect on the outputs. The preprocessing techniques varied depending on the lesion type and the dataset properties. Orlando et al. applied a combination of DCNN-extracted features and manually designed features using image illumination correction, CLAHE contrast enhancement, and color equalization. This high-dimensional feature vector was then fed into an RF classifier to detect lesions, achieving an AUC score of 0.93, which is comparable with some DCNN models [81,137,141].
Some studies used un-ML methods for exudate detection, such as that of Kaur et al. [156], who proposed a pipeline consisting of a vessel and optic disk removal step followed by dynamic thresholding for the detection of CWS and Ex. Prior to this study, Imani et al. [157] performed the same process, focusing on Ex, on a smaller dataset. They employed additional morphological processing and smooth-edge removal to reduce the misdetection of CWS as Ex, reporting SE and SP of 89.1% and 99.9%, almost on par with Kaur's results of 94.8% and 99.8% for SE and SP, respectively. Further description of recent studies on lesion segmentation with ML approaches can be found in Table 5 and Table 6.
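The dynamic-thresholding idea can be caricatured as a per-image statistical cutoff for bright pixels; the mean-plus-k-standard-deviations rule below is an illustrative assumption, not the exact rule of Kaur et al. [156], and the tiny "fundus" image is synthetic.

```python
import numpy as np

def dynamic_threshold_mask(channel, k=2.0):
    """Segment bright candidate lesions (e.g., exudates) with an
    image-adaptive threshold mean + k * std computed per image, so the
    cutoff tracks each photo's own illumination level rather than
    using one fixed global value."""
    t = channel.mean() + k * channel.std()
    return channel > t

img = np.full((16, 16), 0.3)   # uniform background intensity
img[5:7, 5:7] = 0.95           # a small bright "exudate" patch
mask = dynamic_threshold_mask(img)
```

Because the threshold is recomputed per image, a uniformly brighter photo raises its own cutoff instead of flooding the mask, which is the practical advantage over a fixed threshold.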

3.3.2. Deep Learning Approaches

Recent works show that DCNNs can produce promising results in automated DR lesion segmentation, which is mainly performed on fundus imaging. However, some studies apply a combination of fundus and OCT. Holmberg et al. [158] proposed a retinal layer extraction pipeline to measure retinal thickness with a U-Net. Furthermore, Yukun Guo et al. [159] applied DCNNs for avascular zone segmentation from OCTA images and achieved an accuracy of 87.0% for mild to moderate DR and 76.0% for severe DR.
Most other studies focus on DCNNs applied to fundus images, which give a clear view of the lesions on the surface of the retina. Lam et al. [160], for example, deployed state-of-the-art DCNNs (AlexNet, ResNet, GoogleNet, VGG16, and Inception v3) to detect the existence of DR lesions in image patches, achieving 98.0% accuracy on a subset of 243 fundus images obtained from EyePACS. Wang et al. [28] applied Inception v3 as the feature extractor in combination with FCN-32s as the segmentation head. They reported SE values of 60.7%, 49.5%, 28.3%, 36.3%, 57.3%, 8.7%, 79.8%, and 0.164 for PHE, Ex, VHE, NV, CWS, FIP, IHE, and MA, respectively. Quellec et al. [81] focused on four lesions (CWS, Ex, HE, and MA) using a predefined DCNN architecture named the o-O solution and reported SE values of 62.4%, 52.2%, 44.9%, and 31.6% for CWS, Ex, HE, and MA, respectively, which is slightly better for CWS and Ex than Wang et al. [140] and considerably better on MA than Wang et al. [141]. On the other hand, Wang et al. [141] performed better in HE detection. Further details of these and other articles can be found in Table 5 and Table 6.
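Segmentation results such as those above are typically scored with overlap metrics; as a reference, the following sketch computes the Dice coefficient and IoU on small synthetic masks.

```python
import numpy as np

def dice_iou(pred, truth):
    """Overlap metrics commonly reported for lesion segmentation:
    Dice = 2|P∩T| / (|P| + |T|),  IoU = |P∩T| / |P∪T|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return dice, iou

# a 16-pixel ground-truth "lesion" and a prediction shifted by one pixel
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool);  pred[3:7, 3:7] = True
dice, iou = dice_iou(pred, truth)
```

Dice is always at least as large as IoU for the same masks, which is worth remembering when comparing numbers across papers that report different overlap metrics.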

4. Conclusions

Recent studies on DR detection mainly focus on automated methods known as CAD systems. In the scope of CAD systems for DR, there are two major approaches: first, classification and staging of DR severity, and second, segmentation of DR-associated lesions such as MA, HE, Ex, and CWS.
The DR databases are categorized into public databases (36 out of 43) and private databases (7 out of 43). These databases contain fundus and OCT retinal images; of the two imaging modalities, fundus photos are used in 86.0% of the published studies. Several large public fundus datasets are available online. The images may have been taken with different systems, which affects image quality. Furthermore, some of the image-wise DR labels can be erroneous. Databases that provide lesion annotations constitute only a small portion of the total, as they require considerable resources for pixel-wise annotation; hence, some contain fewer images than image-wise labeled databases. Furthermore, lesion annotation requires inter-annotator agreement and high annotation precision. These factors make such datasets error-sensitive, and evaluating their quality can become complicated.
DR classification needs a standard grading system validated by clinicians. ETDRS is the gold-standard grading system for DR progression. Since this grading type needs fine-detail evaluation and access to all 7 FOV fundus images, its use is limited. Thus, ICDR, with its less precise scale, is applicable to single-FOV images for detecting DR severity levels.
The classification and grading of DR can be divided into two main approaches, namely ML-based and DL-based classification. ML/DL-based DR detection generally performs better than ML/DL-based DR grading with the ICDR scale, which requires extracting higher-level features associated with each level of DR [57,71]. The evaluation results show that DCNN architectures can achieve higher performance scores when large databases are used [72]. There is a trade-off between performance on one side and architecture complexity, processing time, and the lack of interpretability of the network's decisions and extracted features on the other. Thus, some recent works have proposed semi-DCNN models containing both DL-based and ML-based components acting as classifier or feature extractor [71,72]. The use of regularization techniques is another way to reduce the complexity of DCNN models [81].
The second approach for CAD-related studies in DR is pixel-wise lesion segmentation or image-wise lesion detection. The main lesions of DR are MA, Ex, HE, and CWS. These lesions differ in detection difficulty, which directly affects the performance of the proposed pipeline. Among them, the annotation of MA is the most challenging [28,167]. Since this lesion is difficult to detect and is the main sign of DR in its early stages, some studies have focused on its pixel-wise segmentation with DCNNs and achieved sufficiently high scores [166]. Although some recent DCNN-based works exhibit high performance in terms of the standard metrics, their lack of interpretability may make them unsuitable for real-life clinical applications. This need for interpretability brings the concept of explainable AI (XAI) into the picture. Explainability studies aim to reveal the features that most influence a model's decision; Singh et al. [87] have reviewed the currently used explainability methods. There is also a need for a large fundus database with high-precision annotation of all associated DR lesions to help in designing more robust, high-performance pipelines.
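One simple, model-agnostic family of explainability methods is occlusion sensitivity: mask patches of the input and record how much the model's score drops, so that influential regions (ideally, lesions) light up in the resulting saliency map. The sketch below uses a toy scoring function in place of a trained DR model; the names `occlusion_map` and `toy_score` are our own illustrative choices, not from the surveyed works.

```python
import numpy as np

def occlusion_map(image: np.ndarray, score_fn, patch: int = 8) -> np.ndarray:
    """Saliency map: score drop observed when each patch is masked out."""
    h, w = image.shape
    base = score_fn(image)
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0  # occlude one patch
            sal[i // patch, j // patch] = base - score_fn(masked)
    return sal

# Toy "model": scores an image by its total intensity, mimicking a detector
# that keys on bright lesions (e.g., exudates).
def toy_score(img: np.ndarray) -> float:
    return float(img.sum())

img = np.zeros((32, 32))
img[4:8, 20:24] = 1.0  # a single bright "lesion"
sal = occlusion_map(img, toy_score, patch=4)

# The most influential patch should coincide with the lesion location.
peak = np.unravel_index(sal.argmax(), sal.shape)
```

For real DCNNs, gradient-based methods such as Grad-CAM serve the same purpose more cheaply, but the occlusion idea makes the underlying question explicit: which image regions does the prediction actually depend on?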

Author Contributions

Conceptualization, V.L. and J.J.B.; methodology, V.L. and J.J.B.; dataset constitution, H.K. and A.S.; writing—original draft preparation, H.K., A.S. and J.J.B.; writing—review and editing, V.L.; project administration, V.L. and J.J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly supported by a DISCOVERY grant to VL from the Natural Sciences and Engineering Research Council of Canada.

Institutional Review Board Statement

Not applicable; the study did not involve humans or animals.

Informed Consent Statement

The study did not involve humans or animals; therefore, informed consent is not applicable.

Data Availability Statement

This is a review of published articles, all of which are publicly available.

Acknowledgments

A.S. acknowledges MITACS, Canada, for the award of a summer internship.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Steinmetz, J.D.; A Bourne, R.R.; Briant, P.S.; Flaxman, S.R.; Taylor, H.R.B.; Jonas, J.B.; Abdoli, A.A.; Abrha, W.A.; Abualhasan, A.; Abu-Gharbieh, E.G.; et al. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: The Right to Sight: An analysis for the Global Burden of Disease Study. Lancet Glob. Health 2021, 9, e144–e160. [Google Scholar] [CrossRef]
  2. Oh, K.; Kang, H.M.; Leem, D.; Lee, H.; Seo, K.Y.; Yoon, S. Early detection of diabetic retinopathy based on deep learning and ultra-wide-field fundus images. Sci. Rep. 2021, 11, 1897. [Google Scholar] [CrossRef] [PubMed]
  3. Early Treatment Diabetic Retinopathy Study Research Group. Grading Diabetic Retinopathy from Stereoscopic Color Fundus Photographs-An Extension of the Modified Airlie House Classification: ETDRS Report Number 10. Ophthalmology 1991, 98, 786–806. [Google Scholar] [CrossRef]
  4. Horton, M.B.; Brady, C.J.; Cavallerano, J.; Abramoff, M.; Barker, G.; Chiang, M.F.; Crockett, C.H.; Garg, S.; Karth, P.; Liu, Y.; et al. Practice Guidelines for Ocular Telehealth-Diabetic Retinopathy, Third Edition. Telemed. E-Health 2020, 26, 495–543. [Google Scholar] [CrossRef] [PubMed]
  5. Solomon, S.D.; Goldberg, M.F. ETDRS Grading of Diabetic Retinopathy: Still the Gold Standard? Ophthalmic Res. 2019, 62, 190–195. [Google Scholar] [CrossRef] [PubMed]
  6. Wilkinson, C.; Ferris, F.; Klein, R.; Lee, P.; Agardh, C.D.; Davis, M.; Dills, D.; Kampik, A.; Pararajasegaram, R.; Verdaguer, J.T. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 2003, 110, 1677–1682. [Google Scholar] [CrossRef]
  7. Rajalakshmi, R.; Prathiba, V.; Arulmalar, S.; Usha, M. Review of retinal cameras for global coverage of diabetic retinopathy screening. Eye 2021, 35, 162–172. [Google Scholar] [CrossRef]
  8. Qureshi, I.; Ma, J.; Abbas, Q. Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry 2019, 11, 749. [Google Scholar] [CrossRef] [Green Version]
  9. Chandran, A.; Mathai, A. Diabetic Retinopathy for the Clinician; Jaypee Brothers: Chennai, India, 2009; Volume 1, p. 79. [Google Scholar]
  10. Ludwig, C.A.; Perera, C.; Myung, D.; Greven, M.A.; Smith, S.J.; Chang, R.T.; Leng, T. Automatic Identification of Referral-Warranted Diabetic Retinopathy Using Deep Learning on Mobile Phone Images. Transl. Vis. Sci. Technol. 2020, 9, 60. [Google Scholar] [CrossRef]
  11. Hsu, W.; Pallawala, P.M.D.S.; Lee, M.L.; Eong, K.-G.A. The role of domain knowledge in the detection of retinal hard exudates. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, Kauai, HI, USA, 8–14 December 2001; Volume 2. [Google Scholar]
  12. Teo, Z.L.; Tham, Y.-C.; Yu, M.C.Y.; Chee, M.L.; Rim, T.H.; Cheung, N.; Bikbov, M.M.; Wang, Y.X.; Tang, Y.; Lu, Y.; et al. Global Prevalence of Diabetic Retinopathy and Projection of Burden through 2045. Ophthalmology 2021. [Google Scholar] [CrossRef]
  13. Derwin, D.J.; Selvi, S.T.; Singh, O.J.; Shan, P.B. A novel automated system of discriminating Microaneurysms in fundus images. Biomed. Signal Process. Control 2020, 58, 101839. [Google Scholar] [CrossRef]
  14. Sivaprasad, S.; Raman, R.; Conroy, D.; Wittenberg, R.; Rajalakshmi, R.; Majeed, A.; Krishnakumar, S.; Prevost, T.; Parameswaran, S.; Turowski, P.; et al. The ORNATE India Project: United Kingdom–India Research Collaboration to tackle visual impairment due to diabetic retinopathy. Eye 2020, 34, 1279–1286. [Google Scholar] [CrossRef] [PubMed]
  15. Majumder, S.; Elloumi, Y.; Akil, M.; Kachouri, R.; Kehtarnavaz, N. A deep learning-based smartphone app for real-time detection of five stages of diabetic retinopathy. In Real-Time Image Processing and Deep Learning; International Society for Optics and Photonics: Bellingham, WA, USA, 2020; Volume 11401, p. 1140106. [Google Scholar] [CrossRef]
  16. Bilal, A.; Sun, G.; Li, Y.; Mazhar, S.; Khan, A.Q. Diabetic Retinopathy Detection and Classification Using Mixed Models for a Disease Grading Database. IEEE Access 2021, 9, 23544–23553. [Google Scholar] [CrossRef]
  17. Qian, X.; Kang, H.; Li, R.; Lu, G.; Du, Z.; Shung, K.K.; Humayun, M.S.; Zhou, Q. In Vivo Visualization of Eye Vasculature Using Super-Resolution Ultrasound Microvessel Imaging. IEEE Trans. Biomed. Eng. 2020, 67, 2870–2880. [Google Scholar] [CrossRef]
  18. Ouahabi, A.; Taleb-Ahmed, A. Deep learning for real-time semantic segmentation: Application in ultrasound imaging. Pattern Recognit. Lett. 2021, 144, 27–34. [Google Scholar] [CrossRef]
  19. Leopold, H.; Zelek, J.; Lakshminarayanan, V. Deep Learning Methods Applied to Retinal Image Analysis. In Signal Processing and Machine Learning for Biomedical Big Data; Sejdic, E., Falk, T., Eds.; CRC Press: Boca Raton, FL, USA, 2018; p. 329. [Google Scholar] [CrossRef]
  20. Sengupta, S.; Singh, A.; Leopold, H.A.; Gulati, T.; Lakshminarayanan, V. Ophthalmic diagnosis using deep learning with fundus images—A critical review. Artif. Intell. Med. 2020, 102, 101758. [Google Scholar] [CrossRef] [PubMed]
  21. Leopold, H.; Sengupta, S.; Singh, A.; Lakshminarayanan, V. Deep Learning on Optical Coherence Tomography for Ophthalmology. In State-of-the-Art in Neural Networks; Elsevier: New York, NY, USA, 2021. [Google Scholar]
  22. Hormel, T.T.; Hwang, T.S.; Bailey, S.T.; Wilson, D.J.; Huang, D.; Jia, Y. Artificial intelligence in OCT angiography. Prog. Retin. Eye Res. 2021, 100965. [Google Scholar] [CrossRef] [PubMed]
  23. Methley, A.M.; Campbell, S.; Chew-Graham, C.; McNally, R.; Cheraghi-Sohi, S. PICO, PICOS and SPIDER: A comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv. Res. 2014, 14, 1–10. [Google Scholar] [CrossRef] [Green Version]
  24. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Altman, D.; Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef] [Green Version]
  25. Khan, S.M.; Liu, X.; Nath, S.; Korot, E.; Faes, L.; Wagner, S.K.; A Keane, P.; Sebire, N.J.; Burton, M.J.; Denniston, A.K. A global review of publicly available datasets for ophthalmological imaging: Barriers to access, usability, and generalisability. Lancet Digit. Health 2021, 3, e51–e66. [Google Scholar] [CrossRef]
  26. Chetoui, M.; Akhloufi, M.A. Explainable end-to-end deep learning for diabetic retinopathy detection across multiple datasets. J. Med. Imaging 2020, 7, 7–25. [Google Scholar] [CrossRef]
  27. Somaraki, V.; Broadbent, D.; Coenen, F.; Harding, S. Finding Temporal Patterns in Noisy Longitudinal Data: A Study in Diabetic Retinopathy. In Advances in Data Mining. Applications and Theoretical Aspects; Springer: New York, NY, USA, 2010; Volume 6171, pp. 418–431. [Google Scholar]
  28. Zhou, Y.; Wang, B.; Huang, L.; Cui, S.; Shao, L. A Benchmark for Studying Diabetic Retinopathy: Segmentation, Grading, and Transferability. IEEE Trans. Med Imaging 2021, 40, 818–828. [Google Scholar] [CrossRef]
  29. Drive-Grand Challenge Official Website. Available online: https://drive.grand-challenge.org/ (accessed on 23 May 2021).
  30. Kauppi, T.; Kalesnykiene, V.; Kamarainen, J.; Lensu, L.; Sorri, I. DIARETDB0: Evaluation Database and Methodology for Diabetic Retinopathy Algorithms. Mach Vis Pattern Recognit Res Group, Lappeenranta Univ Technol Finland. 2006, pp. 1–17. Available online: http://www.siue.edu/~sumbaug/RetinalProjectPapers/DiabeticRetinopathyImageDatabaseInformation.pdf (accessed on 25 May 2021).
  31. Kauppi, T.; Kalesnykiene, V.; Kamarainen, J.-K.; Lensu, L.; Sorri, I.; Raninen, A.; Voutilainen, R.; Uusitalo, H.; Kalviainen, H.; Pietila, J. The diaretdb1 diabetic retinopathy database and evaluation protocol. In Proceedings of the British Machine Vision Conference 2007, Coventry, UK, 10–13 September 2007; pp. 61–65. [Google Scholar] [CrossRef] [Green Version]
  32. Hsieh, Y.-T.; Chuang, L.-M.; Jiang, Y.-D.; Chang, T.-J.; Yang, C.-M.; Yang, C.-H.; Chan, L.-W.; Kao, T.-Y.; Chen, T.-C.; Lin, H.-C.; et al. Application of deep learning image assessment software VeriSee™ for diabetic retinopathy screening. J. Formos. Med. Assoc. 2021, 120, 165–171. [Google Scholar] [CrossRef]
  33. Giancardo, L.; Meriaudeau, F.; Karnowski, T.P.; Li, Y.; Garg, S.; Tobin, K.W.; Chaum, E. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets. Med. Image Anal. 2012, 16, 216–226. [Google Scholar] [CrossRef] [PubMed]
  34. Alipour, S.H.M.; Rabbani, H.; Akhlaghi, M.; Mehridehnavi, A.; Javanmard, S.H. Analysis of foveal avascular zone for grading of diabetic retinopathy severity based on curvelet transform. Graefe’s Arch. Clin. Exp. Ophthalmol. 2012, 250, 1607–1614. [Google Scholar] [CrossRef]
  35. Alipour, S.H.M.; Rabbani, H.; Akhlaghi, M. Diabetic Retinopathy Grading by Digital Curvelet Transform. Comput. Math. Methods Med. 2012, 2012, 1–11. [Google Scholar] [CrossRef] [Green Version]
  36. Esmaeili, M.; Rabbani, H.; Dehnavi, A.; Dehghani, A. Automatic detection of exudates and optic disk in retinal images using curvelet transform. IET Image Process. 2012, 6, 1005–1013. [Google Scholar] [CrossRef]
  37. Prentasic, P.; Loncaric, S.; Vatavuk, Z.; Bencic, G.; Subasic, M.; Petković, T. Diabetic retinopathy image database(DRiDB): A new database for diabetic retinopathy screening programs research. In Proceedings of the 2013 8th International Symposium on Image and Signal Processing and Analysis (ISPA), Trieste, Italy, 4–6 September 2013; pp. 711–716. [Google Scholar]
  38. Decencière, E.; Cazuguel, G.; Zhang, X.; Thibault, G.; Klein, J.-C.; Meyer, F.; Marcotegui, B.; Quellec, G.; Lamard, M.; Danno, R.; et al. TeleOphta: Machine learning and image processing methods for teleophthalmology. IRBM 2013, 34, 196–203. [Google Scholar] [CrossRef]
  39. Odstrcilik, J.; Kolar, R.; Budai, A.; Hornegger, J.; Jan, J.; Gazarek, J.; Kubena, T.; Cernosek, P.; Svoboda, O.; Angelopoulou, E. Retinal vessel segmentation by improved matched filtering: Evaluation on a new high-resolution fundus image database. IET Image Process. 2013, 7, 373–383. [Google Scholar] [CrossRef]
  40. Hu, Q.; Abràmoff, M.D.; Garvin, M.K. Automated Separation of Binary Overlapping Trees in Low-Contrast Color Retinal Images. In Medical Image Computing and Computer-Assisted Intervention; Lecture Notes in Computer Science; Mori, K., Sakuma, I., Sato, Y., Barillot, C., Navab, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8150. [Google Scholar] [CrossRef]
  41. Pires, R.; Jelinek, H.F.; Wainer, J.; Valle, E.; Rocha, A. Advancing Bag-of-Visual-Words Representations for Lesion Classification in Retinal Images. PLoS ONE 2014, 9, e96814. [Google Scholar] [CrossRef]
  42. Sevik, U.; Köse, C.; Berber, T.; Erdöl, H. Identification of suitable fundus images using automated quality assessment methods. J. Biomed. Opt. 2014, 19, 046006. [Google Scholar] [CrossRef]
  43. Alipour, S.H.M.; Rabbani, H.; Akhlaghi, M. A new combined method based on curvelet transform and morphological operators for automatic detection of foveal avascular zone. Signal Image Video Process. 2014, 8, 205–222. [Google Scholar] [CrossRef]
  44. Decencière, E.; Zhang, X.; Cazuguel, G.; Lay, B.; Cochener, B.; Trone, C.; Gain, P.; Ordóñez-Varela, J.-R.; Massin, P.; Erginay, A.; et al. Feedback on a publicly distributed image database: The MESSIDOR database. Image Anal. Ster. 2014, 33, 231. [Google Scholar] [CrossRef] [Green Version]
  45. Bala, M.P.; Vijayachitra, S. Early detection and classification of microaneurysms in retinal fundus images using sequential learning methods. Int. J. Biomed. Eng. Technol. 2014, 15, 128. [Google Scholar] [CrossRef]
  46. Srinivasan, P.P.; Kim, L.; Mettu, P.S.; Cousins, S.W.; Comer, G.M.; Izatt, J.A.; Farsiu, S. Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images. Biomed. Opt. Express 2014, 5, 3568–3577. [Google Scholar] [CrossRef] [Green Version]
  47. Kaggle.com. Available online: https://www.kaggle.com/c/diabetic-retinopathy-detection/data (accessed on 26 May 2021).
  48. People.duke.edu Website. Available online: http://people.duke.edu/~sf59/software.html (accessed on 26 May 2021).
  49. Holm, S.; Russell, G.; Nourrit, V.; McLoughlin, N. DR HAGIS—A fundus image database for the automatic extraction of retinal surface vessels from diabetic patients. J. Med. Imaging 2017, 4, 014503. [Google Scholar] [CrossRef] [Green Version]
  50. Takahashi, H.; Tampo, H.; Arai, Y.; Inoue, Y.; Kawashima, H. Applying artificial intelligence to disease staging: Deep learning for improved staging of diabetic retinopathy. PLoS ONE 2017, 12, e0179790. [Google Scholar] [CrossRef] [Green Version]
  51. Rotterdam Ophthalmic Data Repository. re3data.org. Available online: https://www.re3data.org/repository/r3d (accessed on 22 June 2021).
  52. Ting, D.S.W.; Cheung, C.Y.-L.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; Yeo, I.Y.S.; Lee, S.Y.; et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes. JAMA 2017, 318, 2211–2223. [Google Scholar] [CrossRef]
  53. Porwal, P.; Pachade, S.; Kamble, R.; Kokare, M.; Deshmukh, G.; Sahasrabuddhe, V.; Meriaudeau, F. Indian diabetic retinopathy image dataset (IDRiD): A database for diabetic retinopathy screening research. Data 2018, 3, 25. [Google Scholar] [CrossRef] [Green Version]
  54. Gholami, P.; Roy, P.; Parthasarathy, M.K.; Lakshminarayanan, V. OCTID: Optical coherence tomography image database. Comput. Electr. Eng. 2020, 81, 106532. [Google Scholar] [CrossRef]
  55. Abdulla, W.; Chalakkal, R.J. University of Auckland Diabetic Retinopathy (UoA-DR) Database-End User Licence Agreement. Available online: https://auckland.figshare.com/articles/journal_contribution/UoA-DR_Database_Info/5985208 (accessed on 28 May 2021).
  56. Kaggle.com. Available online: https://www.kaggle.com/c/aptos2019-blindness-detection (accessed on 23 May 2021).
  57. Ali, A.; Qadri, S.; Mashwani, W.K.; Kumam, W.; Kumam, P.; Naeem, S.; Goktas, A.; Jamal, F.; Chesneau, C.; Anam, S.; et al. Machine Learning Based Automated Segmentation and Hybrid Feature Analysis for Diabetic Retinopathy Classification Using Fundus Image. Entropy 2020, 22, 567. [Google Scholar] [CrossRef] [PubMed]
  58. Díaz, M.; Novo, J.; Cutrín, P.; Gómez-Ulla, F.; Penedo, M.G.; Ortega, M. Automatic segmentation of the foveal avascular zone in ophthalmological OCT-A images. PLoS ONE 2019, 14, e0212364. [Google Scholar] [CrossRef]
  59. ODIR-2019. Available online: https://odir2019.grand-challenge.org/ (accessed on 22 June 2021).
  60. Li, T.; Gao, Y.; Wang, K.; Guo, S.; Liu, H.; Kang, H. Diagnostic assessment of deep learning algorithms for diabetic retinopathy screening. Inf. Sci. 2019, 501, 511–522. [Google Scholar] [CrossRef]
  61. Li, F.; Liu, Z.; Chen, H.; Jiang, M.; Zhang, X.; Wu, Z. Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm. Transl. Vis. Sci. Technol. 2019, 8, 4. [Google Scholar] [CrossRef] [Green Version]
  62. Benítez, V.E.C.; Matto, I.C.; Román, J.C.M.; Noguera, J.L.V.; García-Torres, M.; Ayala, J.; Pinto-Roa, D.P.; Gardel-Sotomayor, P.E.; Facon, J.; Grillo, S.A. Dataset from fundus images for the study of diabetic retinopathy. Data Brief. 2021, 36, 107068. [Google Scholar] [CrossRef]
  63. Wei, Q.; Li, X.; Yu, W.; Zhang, X.; Zhang, Y.; Hu, B.; Mo, B.; Gong, D.; Chen, N.; Ding, D.; et al. Learn to Segment Retinal Lesions and Beyond. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 7403–7410. [Google Scholar]
  64. Noor-Ul-Huda, M.; Tehsin, S.; Ahmed, S.; Niazi, F.A.; Murtaza, Z. Retinal images benchmark for the detection of diabetic retinopathy and clinically significant macular edema (CSME). Biomed. Tech. Eng. 2018, 64, 297–307. [Google Scholar] [CrossRef]
  65. Ohsugi, H.; Tabuchi, H.; Enno, H.; Ishitobi, N. Accuracy of deep learning, a machine-learning technology, using ultra–wide-field fundus ophthalmoscopy for detecting rhegmatogenous retinal detachment. Sci. Rep. 2017, 7, 9425. [Google Scholar] [CrossRef]
  66. Abràmoff, M.D.; Lou, Y.; Erginay, A.; Clarida, W.; Amelon, R.; Folk, J.C.; Niemeijer, M. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning. Investig. Ophthalmol. Vis. Sci. 2016, 57, 5200–5206. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Kafieh, R.; Rabbani, H.; Hajizadeh, F.; Ommani, M. An Accurate Multimodal 3-D Vessel Segmentation Method Based on Brightness Variations on OCT Layers and Curvelet Domain Fundus Image Analysis. IEEE Trans. Biomed. Eng. 2013, 60, 2815–2823. [Google Scholar] [CrossRef]
  68. Gargeya, R.; Leng, T. Automated Identification of Diabetic Retinopathy Using Deep Learning. Ophthalmology 2017, 124, 962–969. [Google Scholar] [CrossRef]
  69. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef] [PubMed]
  70. Sahlsten, J.; Jaskari, J.; Kivinen, J.; Turunen, L.; Jaanio, E.; Hietala, K.; Kaski, K. Deep Learning Fundus Image Analysis for Diabetic Retinopathy and Macular Edema Grading. Sci. Rep. 2019, 9, 10750. [Google Scholar] [CrossRef] [Green Version]
  71. Gadekallu, T.R.; Khare, N.; Bhattacharya, S.; Singh, S.; Maddikunta, P.K.R.; Ra, I.-H.; Alazab, M. Early Detection of Diabetic Retinopathy Using PCA-Firefly Based Deep Learning Model. Electron. 2020, 9, 274. [Google Scholar] [CrossRef] [Green Version]
  72. Mansour, R.F. Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy. Biomed. Eng. Lett. 2017, 8, 41–57. [Google Scholar] [CrossRef] [PubMed]
  73. Paradisa, R.H.; Sarwinda, D.; Bustamam, A.; Argyadiva, T. Classification of Diabetic Retinopathy through Deep Feature Extraction and Classic Machine Learning Approach. In Proceedings of the 2020 3rd International Conference on Information and Communications Technology (ICOIACT), Yogyakarta, Indonesia, 24–25 November 2020; pp. 377–381. [Google Scholar]
  74. Elswah, D.K.; Elnakib, A.A.; Moustafa, H.E.-D. Automated Diabetic Retinopathy Grading using Resnet. In Proceedings of the National Radio Science Conference, NRSC, Cairo, Egypt, 8–10 September 2020; pp. 248–254. [Google Scholar]
  75. Sandhu, H.S.; Elmogy, M.; Sharafeldeen, A.; Elsharkawy, M.; El-Adawy, N.; Eltanboly, A.; Shalaby, A.; Keynton, R.; El-Baz, A. Automated Diagnosis of Diabetic Retinopathy Using Clinical Biomarkers, Optical Coherence Tomography, and Optical Coherence Tomography Angiography. Am. J. Ophthalmol. 2020, 216, 201–206. [Google Scholar] [CrossRef] [PubMed]
  76. Somasundaram, S.K.; Alli, P. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy. J. Med. Syst. 2017, 41, 201. [Google Scholar]
  77. Liu, Z.; Wang, C.; Cai, X.; Jiang, H.; Wang, J. Discrimination of Diabetic Retinopathy From Optical Coherence Tomography Angiography Images Using Machine Learning Methods. IEEE Access 2021, 9, 51689–51694. [Google Scholar] [CrossRef]
  78. Levenkova, A.; Kalloniatis, M.; Ly, A.; Ho, A.; Sowmya, A. Lesion detection in ultra-wide field retinal images for diabetic retinopathy diagnosis. In Medical Imaging 2018: Computer-Aided Diagnosis; International Society for Optics and Photonics: Bellingham, WA, USA, 2018; Volume 10575, p. 1057531. [Google Scholar]
  79. Rajalakshmi, R.; Subashini, R.; Anjana, R.M.; Mohan, V. Automated diabetic retinopathy detection in smartphone-based fundus photography using artificial intelligence. Eye 2018, 32, 1138–1144. [Google Scholar] [CrossRef]
  80. Riaz, H.; Park, J.; Choi, H.; Kim, H.; Kim, J. Deep and Densely Connected Networks for Classification of Diabetic Retinopathy. Diagnostics 2020, 10, 24. [Google Scholar] [CrossRef] [Green Version]
  81. Quellec, G.; Charrière, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep image mining for diabetic retinopathy screening. Med Image Anal. 2017, 39, 178–193. [Google Scholar] [CrossRef] [Green Version]
  82. Sayres, R.; Taly, A.; Rahimy, E.; Blumer, K.; Coz, D.; Hammel, N.; Krause, J.; Narayanaswamy, A.; Rastegar, Z.; Wu, D.; et al. Using a Deep Learning Algorithm and Integrated Gradients Explanation to Assist Grading for Diabetic Retinopathy. Ophthalmology 2019, 126, 552–564. [Google Scholar] [CrossRef] [Green Version]
  83. Hua, C.-H.; Huynh-The, T.; Kim, K.; Yu, S.-Y.; Le-Tien, T.; Park, G.H.; Bang, J.; Khan, W.A.; Bae, S.-H.; Lee, S. Bimodal learning via trilogy of skip-connection deep networks for diabetic retinopathy risk progression identification. Int. J. Med. Inform. 2019, 132, 103926. [Google Scholar] [CrossRef] [PubMed]
  84. Pao, S.-I.; Lin, H.-Z.; Chien, K.-H.; Tai, M.-C.; Chen, J.-T.; Lin, G.-M. Detection of Diabetic Retinopathy Using Bichannel Convolutional Neural Network. J. Ophthalmol. 2020, 2020, 1–7. [Google Scholar] [CrossRef] [PubMed]
  85. Shankar, K.; Sait, A.R.W.; Gupta, D.; Lakshmanaprabu, S.; Khanna, A.; Pandey, H.M. Automated detection and classification of fundus diabetic retinopathy images using synergic deep learning model. Pattern Recognit. Lett. 2020, 133, 210–216. [Google Scholar] [CrossRef]
  86. Singh, A.; Sengupta, S.; Lakshminarayanan, V. Explainable Deep Learning Models in Medical Image Analysis. J. Imaging 2020, 6, 52. [Google Scholar] [CrossRef]
  87. Singh, A.; Sengupta, S.; Rasheed, M.A.; Jayakumar, V.; Lakshminarayanan, V. Uncertainty aware and explainable diagnosis of retinal disease. In Medical Imaging 2021: Imaging Informatics for Healthcare, Research, and Applications; International Society for Optics and Photonics: Bellingham, WA, USA, 2021; Volume 11601, p. 116010J. [Google Scholar]
  88. Singh, A.; Jothi Balaji, J.; Rasheed, M.A.; Jayakumar, V.; Raman, R.; Lakshminarayanan, V. Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis. Clin. Ophthalmol. 2021, 15, 2573–2581. [Google Scholar] [CrossRef]
  89. Keel, S.; Wu, J.; Lee, P.Y.; Scheetz, J.; He, M. Visualizing Deep Learning Models for the Detection of Referable Diabetic Retinopathy and Glaucoma. JAMA Ophthalmol. 2019, 137, 288–292. [Google Scholar] [CrossRef]
  90. Chandrakumar, T.; Kathirvel, R. Classifying Diabetic Retinopathy using Deep Learning Architecture. Int. J. Eng. Res. 2016, 5, 19–24. [Google Scholar] [CrossRef]
  91. Colas, E.; Besse, A.; Orgogozo, A.; Schmauch, B.; Meric, N. Deep learning approach for diabetic retinopathy screening. Acta Ophthalmol. 2016, 94. [Google Scholar] [CrossRef]
  92. Wong, T.Y.; Bressler, N.M. Artificial Intelligence With Deep Learning Technology Looks Into Diabetic Retinopathy Screening. JAMA 2016, 316, 2366–2367. [Google Scholar] [CrossRef] [PubMed]
  93. Wang, Z.; Yin, Y.; Shi, J.; Fang, W.; Li, H.; Wang, X. Zoom-in-Net: Deep Mining Lesions for Diabetic Retinopathy Detection. In Proceedings of the Transactions on Petri Nets and Other Models of Concurrency XV, Quebec City, QC, Canada, 11–13 September 2017; pp. 267–275. [Google Scholar]
  94. Benson, J.; Maynard, J.; Zamora, G.; Carrillo, H.; Wigdahl, J.; Nemeth, S.; Barriga, S.; Estrada, T.; Soliz, P. Transfer learning for diabetic retinopathy. Image Process. 2018, 70, 105741Z. [Google Scholar]
  95. Chakrabarty, N. A Deep Learning Method for the detection of Diabetic Retinopathy. In Proceedings of the 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), Gorakhpur, India, 2–4 November 2018; pp. 1–5. [Google Scholar]
  96. Costa, P.; Galdran, A.; Smailagic, A.; Campilho, A. A Weakly-Supervised Framework for Interpretable Diabetic Retinopathy Detection on Retinal Images. IEEE Access 2018, 6, 18747–18758. [Google Scholar] [CrossRef]
  97. Dai, L.; Fang, R.; Li, H.; Hou, X.; Sheng, B.; Wu, Q.; Jia, W. Clinical Report Guided Retinal Microaneurysm Detection With Multi-Sieving Deep Learning. IEEE Trans. Med. Imaging 2018, 37, 1149–1161. [Google Scholar] [CrossRef] [PubMed]
  98. Dutta, S.; Manideep, B.C.; Basha, S.M.; Caytiles, R.D.; Iyengar, N.C.S.N. Classification of Diabetic Retinopathy Images by Using Deep Learning Models. Int. J. Grid Distrib. Comput. 2018, 11, 99–106. [Google Scholar] [CrossRef]
  99. Kwasigroch, A.; Jarzembinski, B.; Grochowski, M. Deep CNN based decision support system for detection and assessing the stage of diabetic retinopathy. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Świnoujście, Poland, 9–12 May 2018; pp. 111–116. [Google Scholar] [CrossRef]
  100. Islam, M.R.; Hasan, M.A.M.; Sayeed, A. Transfer Learning based Diabetic Retinopathy Detection with a Novel Preprocessed Layer. In Proceedings of the 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, 5–7 June 2020; pp. 888–891. [Google Scholar]
  101. Zhang, S.; Wu, H.; Wang, X.; Cao, L.; Schwartz, J.; Hernandez, J.; Rodríguez, G.; Liu, B.J.; Murthy, V. The application of deep learning for diabetic retinopathy prescreening in research eye-PACS. Imaging Inform. Healthc. Res. Appl. 2018, 10579, 1057913. [Google Scholar] [CrossRef]
  102. Fang, M.; Zhang, X.; Zhang, W.; Xue, J.; Wu, L. Automatic classification of diabetic retinopathy based on convolutional neural networks. In Proceedings of the 2018 International Conference on Image and Video Processing, and Artificial Intelligence, Shanghai, China, 15–17 August 2018; Volume 10836, p. 1083608. [Google Scholar]
  103. Arcadu, F.; Benmansour, F.; Maunz, A.; Willis, J.; Haskova, Z.; Prunotto, M. Deep learning algorithm predicts diabetic retinopathy progression in individual patients. NPJ Digit. Med. 2019, 2, 92. [Google Scholar] [CrossRef]
  104. Bellemo, V.; Lim, Z.W.; Lim, G.; Nguyen, Q.D.; Xie, Y.; Yip, M.Y.T.; Hamzah, H.; Ho, J.; Lee, X.Q.; Hsu, W.; et al. Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: A clinical validation study. Lancet Digit. Health 2019, 1, e35–e44. [Google Scholar] [CrossRef] [Green Version]
  105. Chowdhury, M.M.H.; Meem, N.T.A. A Machine Learning Approach to Detect Diabetic Retinopathy Using Convolutional Neural Network; Springer: Singapore, 2019; pp. 255–264. [Google Scholar]
  106. Govindaraj, V.; Balaji, M.; Mohideen, T.A.; Mohideen, S.A.F.J. Eminent identification and classification of Diabetic Retinopathy in clinical fundus images using Probabilistic Neural Network. In Proceedings of the 2019 IEEE International Conference on Intelligent Techniques in Control, Optimization and Signal Processing (INCOS), Tamilnadu, India, 11–13 April 2019; pp. 1–6. [Google Scholar]
  107. Gulshan, V.; Rajan, R.; Widner, K.; Wu, D.; Wubbels, P.; Rhodes, T.; Whitehouse, K.; Coram, M.; Corrado, G.; Ramasamy, K.; et al. Performance of a Deep-Learning Algorithm vs Manual Grading for Detecting Diabetic Retinopathy in India. JAMA Ophthalmol. 2019, 137, 987–993. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  108. Hathwar, S.B.; Srinivasa, G. Automated Grading of Diabetic Retinopathy in Retinal Fundus Images using Deep Learning. In Proceedings of the 2019 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 17–19 September 2019; pp. 73–77. [Google Scholar]
  109. He, J.; Shen, L.; Ai, X.; Li, X. Diabetic Retinopathy Grade and Macular Edema Risk Classification Using Convolutional Neural Networks. In Proceedings of the 2019 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS), Shenyang, China, 12–14 July 2019; pp. 463–466. [Google Scholar]
  110. Jiang, H.; Yang, K.; Gao, M.; Zhang, D.; Ma, H.; Qian, W. An Interpretable Ensemble Deep Learning Model for Diabetic Retinopathy Disease Classification. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 2045–2048. [Google Scholar]
  111. Li, X.; Hu, X.; Yu, L.; Zhu, L.; Fu, C.-W.; Heng, P.-A. CANet: Cross-Disease Attention Network for Joint Diabetic Retinopathy and Diabetic Macular Edema Grading. IEEE Trans. Med. Imaging 2020, 39, 1483–1493. [Google Scholar] [CrossRef] [Green Version]
  112. Metan, A.C.; Lambert, A.; Pickering, M. Small Scale Feature Propagation Using Deep Residual Learning for Diabetic Retinopathy Classification. In Proceedings of the 2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC), Xiamen, China, 5–7 July 2019; pp. 392–396. [Google Scholar]
  113. Nagasawa, T.; Tabuchi, H.; Masumoto, H.; Enno, H.; Niki, M.; Ohara, Z.; Yoshizumi, Y.; Ohsugi, H.; Mitamura, Y. Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy. Int. Ophthalmol. 2019, 39, 2153–2159. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  114. Qummar, S.; Khan, F.G.; Shah, S.; Khan, A.; Shamshirband, S.; Rehman, Z.U.; Khan, I.A.; Jadoon, W. A Deep Learning Ensemble Approach for Diabetic Retinopathy Detection. IEEE Access 2019, 7, 150530–150539. [Google Scholar] [CrossRef]
  115. Bora, A.; Balasubramanian, S.; Babenko, B.; Virmani, S.; Venugopalan, S.; Mitani, A.; Marinho, G.D.O.; Cuadros, J.; Ruamviboonsuk, P.; Corrado, G.S.; et al. Predicting the risk of developing diabetic retinopathy using deep learning. Lancet Digit. Health 2021, 3, e10–e19. [Google Scholar] [CrossRef]
  116. Sengupta, S.; Singh, A.; Zelek, J.; Lakshminarayanan, V. Cross-domain diabetic retinopathy detection using deep learning. Appl. Mach. Learn. 2019, 11139, 111390V. [Google Scholar] [CrossRef]
  117. Ting, D.S.W.; Cheung, C.Y.; Nguyen, Q.; Sabanayagam, C.; Lim, G.; Lim, Z.W.; Tan, G.S.W.; Soh, Y.Q.; Schmetterer, L.; Wang, Y.X.; et al. Deep learning in estimating prevalence and systemic risk factors for diabetic retinopathy: A multi-ethnic study. npj Digit. Med. 2019, 2, 24. [Google Scholar] [CrossRef] [Green Version]
  118. Zeng, X.; Chen, H.; Luo, Y.; Ye, W. Automated Diabetic Retinopathy Detection Based on Binocular Siamese-Like Convolutional Neural Network. IEEE Access 2019, 7, 30744–30753. [Google Scholar] [CrossRef]
  119. Araújo, T.; Aresta, G.; Mendonça, L.; Penas, S.; Maia, C.; Carneiro, Â.; Mendonça, A.M.; Campilho, A. DR|GRADUATE: Uncertainty-aware deep learning-based diabetic retinopathy grading in eye fundus images. Med. Image Anal. 2020, 63, 101715. [Google Scholar] [CrossRef] [PubMed]
120. Gadekallu, T.R.; Khare, N.; Bhattacharya, S.; Singh, S.; Maddikunta, P.K.R. Deep neural networks to predict diabetic retinopathy. J. Ambient. Intell. Hum. Comput. 2020. [Google Scholar] [CrossRef]
  121. Gayathri, S.; Krishna, A.K.; Gopi, V.P.; Palanisamy, P. Automated Binary and Multiclass Classification of Diabetic Retinopathy Using Haralick and Multiresolution Features. IEEE Access 2020, 8, 57497–57504. [Google Scholar] [CrossRef]
  122. Jiang, H.; Xu, J.; Shi, R.; Yang, K.; Zhang, D.; Gao, M.; Ma, H.; Qian, W. A Multi-Label Deep Learning Model with Interpretable Grad-CAM for Diabetic Retinopathy Classification. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; Volume 2020, pp. 1560–1563. [Google Scholar]
  123. Lands, A.; Kottarathil, A.J.; Biju, A.; Jacob, E.M.; Thomas, S. Implementation of deep learning based algorithms for diabetic retinopathy classification from fundus images. In Proceedings of the 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI)(48184), Tirunelveli, India, 15–17 June 2020; pp. 1028–1032. [Google Scholar]
  124. Memari, N.; Abdollahi, S.; Ganzagh, M.M.; Moghbel, M. Computer-assisted diagnosis (CAD) system for Diabetic Retinopathy screening using color fundus images using Deep learning. In Proceedings of the 2020 IEEE Student Conference on Research and Development (SCOReD), Batu Pahat, Malaysia, 27–29 September 2020; pp. 69–73. [Google Scholar]
  125. Narayanan, B.N.; Hardie, R.C.; De Silva, M.S.; Kueterman, N.K. Hybrid machine learning architecture for automated detection and grading of retinal images for diabetic retinopathy. J. Med. Imaging 2020, 7, 034501. [Google Scholar] [CrossRef]
  126. Patel, R.; Chaware, A. Transfer Learning with Fine-Tuned MobileNetV2 for Diabetic Retinopathy. In Proceedings of the 2020 International Conference for Emerging Technology (INCET), Belgaum, India, 5–7 June 2020; pp. 1–4. [Google Scholar]
  127. Samanta, A.; Saha, A.; Satapathy, S.C.; Fernandes, S.L.; Zhang, Y.-D. Automated detection of diabetic retinopathy using convolutional neural networks on a small dataset. Pattern Recognit. Lett. 2020, 135, 293–298. [Google Scholar] [CrossRef]
  128. Serener, A.; Serte, S. Geographic variation and ethnicity in diabetic retinopathy detection via deep learning. Turkish J. Electr. Eng. Comput. Sci. 2020, 28, 664–678. [Google Scholar] [CrossRef]
  129. Shaban, M.; Ogur, Z.; Mahmoud, A.; Switala, A.; Shalaby, A.; Abu Khalifeh, H.; Ghazal, M.; Fraiwan, L.; Giridharan, G.; Sandhu, H.; et al. A convolutional neural network for the screening and staging of diabetic retinopathy. PLoS ONE 2020, 15, e0233514. [Google Scholar] [CrossRef]
  130. Singh, R.K.; Gorantla, R. DMENet: Diabetic Macular Edema diagnosis using Hierarchical Ensemble of CNNs. PLoS ONE 2020, 15, e0220677. [Google Scholar] [CrossRef] [Green Version]
  131. Thota, N.B.; Reddy, D.U. Improving the Accuracy of Diabetic Retinopathy Severity Classification with Transfer Learning. In Proceedings of the 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, MA, USA, 9–12 August 2020; pp. 1003–1006. [Google Scholar] [CrossRef]
  132. Wang, X.-N.; Dai, L.; Li, S.-T.; Kong, H.-Y.; Sheng, B.; Wu, Q. Automatic Grading System for Diabetic Retinopathy Diagnosis Using Deep Learning Artificial Intelligence Software. Curr. Eye Res. 2020, 45, 1550–1555. [Google Scholar] [CrossRef]
  133. Wang, J.; Bai, Y.; Xia, B. Simultaneous Diagnosis of Severity and Features of Diabetic Retinopathy in Fundus Photography Using Deep Learning. IEEE J. Biomed. Health Inform. 2020, 24, 3397–3407. [Google Scholar] [CrossRef] [PubMed]
  134. Zhang, W.; Zhao, X.J.; Chen, Y.; Zhong, J.; Yi, Z. DeepUWF: An Automated Ultra-Wide-Field Fundus Screening System via Deep Learning. IEEE J. Biomed. Health Inform. 2021, 25, 2988–2996. [Google Scholar] [CrossRef] [PubMed]
  135. Abdelmaksoud, E.; El-Sappagh, S.; Barakat, S.; AbuHmed, T.; Elmogy, M. Automatic Diabetic Retinopathy Grading System Based on Detecting Multiple Retinal Lesions. IEEE Access 2021, 9, 15939–15960. [Google Scholar] [CrossRef]
  136. Gangwar, A.K.; Ravi, V. Diabetic Retinopathy Detection Using Transfer Learning and Deep Learning. In Evolution in Computational Intelligence; Springer: Singapore, 2020; pp. 679–689. [Google Scholar] [CrossRef]
  137. He, A.; Li, T.; Li, N.; Wang, K.; Fu, H. CABNet: Category Attention Block for Imbalanced Diabetic Retinopathy Grading. IEEE Trans. Med. Imaging 2021, 40, 143–153. [Google Scholar] [CrossRef]
  138. Khan, Z.; Khan, F.G.; Khan, A.; Rehman, Z.U.; Shah, S.; Qummar, S.; Ali, F.; Pack, S. Diabetic Retinopathy Detection Using VGG-NIN a Deep Learning Architecture. IEEE Access 2021, 9, 61408–61416. [Google Scholar] [CrossRef]
139. Saeed, F.; Hussain, M.; Aboalsamh, H.A. Automatic Diabetic Retinopathy Diagnosis Using Adaptive Fine-Tuned Convolutional Neural Network. IEEE Access 2021, 9, 41344–41359. [Google Scholar] [CrossRef]
  140. Wang, Y.; Yu, M.; Hu, B.; Jin, X.; Li, Y.; Zhang, X.; Zhang, Y.; Gong, D.; Wu, C.; Zhang, B.; et al. Deep learning-based detection and stage grading for optimising diagnosis of diabetic retinopathy. Diabetes/Metab. Res. Rev. 2021, 37, 3445. [Google Scholar] [CrossRef]
141. Wang, S.; Wang, X.; Hu, Y.; Shen, Y.; Yang, Z.; Gen, M.; Lei, B. Diabetic Retinopathy Diagnosis Using Multichannel Generative Adversarial Network with Semisupervision. IEEE Trans. Autom. Sci. Eng. 2021, 18, 574–585. [Google Scholar] [CrossRef]
142. Datta, N.S.; Dutta, H.S.; Majumder, K. Brightness-preserving fuzzy contrast enhancement scheme for the detection and classification of diabetic retinopathy disease. J. Med. Imaging 2016, 3, 014502. [Google Scholar] [CrossRef] [Green Version]
  143. Lin, G.-M.; Chen, M.-J.; Yeh, C.-H.; Lin, Y.-Y.; Kuo, H.-Y.; Lin, M.-H.; Chen, M.-C.; Lin, S.D.; Gao, Y.; Ran, A.; et al. Transforming Retinal Photographs to Entropy Images in Deep Learning to Improve Automated Detection for Diabetic Retinopathy. J. Ophthalmol. 2018, 2018, 1–6. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  144. Panigrahi, P.K.; Mukhopadhyay, S.; Pratiher, S.; Chhablani, J.; Mukherjee, S.; Barman, R.; Pasupuleti, G. Statistical classifiers on local binary patterns for optical diagnosis of diabetic retinopathy. Nanophotonics 2018, 10685, 106852Y. [Google Scholar] [CrossRef]
  145. Pour, A.M.; Seyedarabi, H.; Jahromi, S.H.A.; Javadzadeh, A. Automatic Detection and Monitoring of Diabetic Retinopathy Using Efficient Convolutional Neural Networks and Contrast Limited Adaptive Histogram Equalization. IEEE Access 2020, 8, 136668–136673. [Google Scholar] [CrossRef]
146. Ramchandre, S.; Patil, B.; Pharande, S.; Javali, K.; Pande, H. A Deep Learning Approach for Diabetic Retinopathy detection using Transfer Learning. In Proceedings of the 2020 IEEE International Conference for Innovation in Technology (INOCON), Bengaluru, India, 6–8 November 2020; pp. 1–5. [Google Scholar]
  147. Bhardwaj, C.; Jain, S.; Sood, M. Deep Learning–Based Diabetic Retinopathy Severity Grading System Employing Quadrant Ensemble Model. J. Digit. Imaging 2021. [Google Scholar] [CrossRef]
  148. Elloumi, Y.; Ben Mbarek, M.; Boukadida, R.; Akil, M.; Bedoui, M.H. Fast and accurate mobile-aided screening system of moderate diabetic retinopathy. In Proceedings of the Thirteenth International Conference on Machine Vision, Rome, Italy, 2–6 November 2020; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2021; Volume 11605, p. 116050U. [Google Scholar]
  149. Eladawi, N.; Elmogy, M.; Fraiwan, L.; Pichi, F.; Ghazal, M.; Aboelfetouh, A.; Riad, A.; Keynton, R.; Schaal, S.; El-Baz, A. Early Diagnosis of Diabetic Retinopathy in OCTA Images Based on Local Analysis of Retinal Blood Vessels and Foveal Avascular Zone. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 3886–3891. [Google Scholar]
  150. Islam, K.T.; Wijewickrema, S.; O’Leary, S. Identifying Diabetic Retinopathy from OCT Images using Deep Transfer Learning with Artificial Neural Networks. In Proceedings of the 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), Cordoba, Spain, 5–7 June 2019; pp. 281–286. [Google Scholar] [CrossRef]
  151. Le, D.; Alam, M.N.; Lim, J.I.; Chan, R.P.; Yao, X. Deep learning for objective OCTA detection of diabetic retinopathy. Ophthalmic Technol. 2020, 11218, 112181P. [Google Scholar] [CrossRef]
  152. Singh, A.; Sengupta, S.; Mohammed, A.R.; Faruq, I.; Jayakumar, V.; Zelek, J.; Lakshminarayanan, V. What is the Optimal Attribution Method for Explainable Ophthalmic Disease Classification? In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2020; pp. 21–31. [Google Scholar]
  153. Singh, A.; Mohammed, A.R.; Zelek, J.; Lakshminarayanan, V. Interpretation of deep learning using attributions: Application to ophthalmic diagnosis. Appl. Mach. Learn. 2020, 11511, 115110A. [Google Scholar] [CrossRef]
  154. Shah, S.A.A.; Laude, A.; Faye, I.; Tang, T.B. Automated microaneurysm detection in diabetic retinopathy using curvelet transform. J. Biomed. Opt. 2016, 21, 101404. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  155. Huang, H.; Ma, H.; van Triest, H.J.W.; Wei, Y.; Qian, W. Automatic detection of neovascularization in retinal images using extreme learning machine. Neurocomputing 2018, 277, 218–227. [Google Scholar] [CrossRef]
  156. Kaur, J.; Mittal, D. A generalized method for the segmentation of exudates from pathological retinal fundus images. Biocybern. Biomed. Eng. 2018, 38, 27–53. [Google Scholar] [CrossRef]
  157. Imani, E.; Pourreza, H.-R. A novel method for retinal exudate segmentation using signal separation algorithm. Comput. Methods Programs Biomed. 2016, 133, 195–205. [Google Scholar] [CrossRef]
  158. Holmberg, O.; Köhler, N.D.; Martins, T.; Siedlecki, J.; Herold, T.; Keidel, L.; Asani, B.; Schiefelbein, J.; Priglinger, S.; Kortuem, K.U.; et al. Self-supervised retinal thickness prediction enables deep learning from unlabelled data to boost classification of diabetic retinopathy. Nat. Mach. Intell. 2020, 2, 719–726. [Google Scholar] [CrossRef]
  159. Guo, Y.; Camino, A.; Wang, J.; Huang, D.; Hwang, T.; Jia, Y. MEDnet, a neural network for automated detection of avascular area in OCT angiography. Biomed. Opt. Express 2018, 9, 5147–5158. [Google Scholar] [CrossRef]
160. Lam, C.; Yu, C.; Huang, L.; Rubin, D. Retinal Lesion Detection With Deep Learning Using Image Patches. Investig. Ophthalmol. Vis. Sci. 2018, 59, 590–596. [Google Scholar] [CrossRef]
  161. Benzamin, A.; Chakraborty, C. Detection of hard exudates in retinal fundus images using deep learning. In Proceedings of the 2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Kitakyushu, Japan, 25–29 June 2018; pp. 465–469. [Google Scholar]
  162. Orlando, J.I.; Prokofyeva, E.; del Fresno, M.; Blaschko, M. An ensemble deep learning based approach for red lesion detection in fundus images. Comput. Methods Programs Biomed. 2018, 153, 115–127. [Google Scholar] [CrossRef] [Green Version]
  163. Eftekhari, N.; Pourreza, H.-R.; Masoudi, M.; Ghiasi-Shirazi, K.; Saeedi, E. Microaneurysm detection in fundus images using a two-step convolutional neural network. Biomed. Eng. Online 2019, 18, 1–16. [Google Scholar] [CrossRef] [Green Version]
  164. Wu, Q.; Cheddad, A. Segmentation-based Deep Learning Fundus Image Analysis. In Proceedings of the 2019 Ninth International Conference on Image Processing Theory, Tools and Applications (IPTA), Istanbul, Turkey, 6–9 November 2019; pp. 1–5. [Google Scholar]
  165. Yan, Z.; Han, X.; Wang, C.; Qiu, Y.; Xiong, Z.; Cui, S. Learning Mutually Local-Global U-Nets For High-Resolution Retinal Lesion Segmentation In Fundus Images. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 597–600. [Google Scholar] [CrossRef] [Green Version]
  166. Qiao, L.; Zhu, Y.; Zhou, H. Diabetic Retinopathy Detection Using Prognosis of Microaneurysm and Early Diagnosis System for Non-Proliferative Diabetic Retinopathy Based on Deep Learning Algorithms. IEEE Access 2020, 8, 104292–104302. [Google Scholar] [CrossRef]
167. Xu, Y.; Zhou, Z.; Li, X.; Zhang, N.; Zhang, M.; Wei, P. FFU-Net: Feature Fusion U-Net for Lesion Segmentation of Diabetic Retinopathy. Biomed. Res. Int. 2021, 2021, 6644071. [Google Scholar] [PubMed]
  168. ElTanboly, A.H.; Palacio, A.; Shalaby, A.M.; Switala, A.E.; Helmy, O.; Schaal, S.; El-Baz, A. An automated approach for early detection of diabetic retinopathy using SD-OCT images. Front. Biosci. Elit. 2018, 10, 197–207. [Google Scholar]
169. Sandhu, H.S.; Eltanboly, A.; Shalaby, A.; Keynton, R.S.; Schaal, S.; El-Baz, A. Automated diagnosis and grading of diabetic retinopathy using optical coherence tomography. Investig. Ophthalmol. Vis. Sci. 2018, 59, 3155–3160. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Retinal fundus images of different stages of diabetic retinopathy. (A) Stage II: Mild non-proliferative diabetic retinopathy; (B) Stage III: Moderate non-proliferative diabetic retinopathy; (C) Stage IV: Severe non-proliferative diabetic retinopathy; (D) Stage V: Proliferative diabetic retinopathy (images courtesy of Rajiv Raman et al., Sankara Nethralaya, India).
Figure 2. Increase in the number of articles matching the predefined keywords over the last 6 years; the PubMed search results were used to create this figure.
Figure 3. A plot of the number of articles as a function of year. This figure was generated using results from all 5 databases using search terms Diabetic Retinopathy AND (“Deep Learning” OR “Machine Learning”).
Figure 4. Flowchart summarizing the literature search and dataset identification using PRISMA review strategy for identifying articles related to automated detection and diagnosis of diabetic retinopathy.
Table 1. Datasets for DR detection, grading and segmentation and their respective characteristics.
| Dataset | No. of Images | Device Used | Access | Country | Year | No. of Subjects | Type | Format | Remarks |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DRIVE [29] | 40 | Canon CR5 non-mydriatic 3CCD camera with a 45° FOV | OA | Netherlands | 2004 | 400 | Fundus | JPEG | Retinal vessel segmentation and ophthalmic diseases |
| DIARETDB0 [30] | 130 | 50° FOV DFC | OA | Finland | 2006 | NR | Fundus | PNG | DR detection and grading |
| DIARETDB1 [31] | 89 | 50° FOV DFC | OA | Finland | 2007 | NR | Fundus | PNG | DR detection and grading |
| National Taiwan University Hospital [32] | 30 | Heidelberg retina tomograph with Rostock corneal module | OA | Japan | 2007–2017 | 30 | Fundus | TIFF | DR, pseudoexfoliation |
| HEI-MED [33] | 169 | Visucam PRO fundus camera (Zeiss, Germany) | OA | USA | 2010 | 910 | Fundus | JPEG | DR detection and grading |
| 19 CF [34] | 60 | NR | OA | Iran | 2012 | 60 | Fundus | JPEG | DR detection |
| FFA Photographs & CF [35] | 120 | NR | OA | Iran | 2012 | 60 | FFA | JPEG | DR grading and lesion detection |
| Fundus Images with Exudates [36] | 35 | NR | OA | Iran | 2012 | NR | Fundus | JPEG | Lesion detection |
| DRiDB [37] | 50 | Zeiss VISUCAM 200 DFC at a 45° FOV | AUR | Croatia | 2013 | NR | Fundus | BMP | DR grading |
| eOphtha [38] | 463 | NR | OA | France | 2013 | NR | Fundus | JPEG | Lesion detection |
| Longitudinal DR screening data [27] | 1120 | Topcon TRC-NW65 with a 45° FOV | OA | Netherlands | 2013 | 70 | Fundus | JPEG | DR grading |
| 22 HRF [39] | 45 | CF-60UVi camera (Canon) | OA | Germany and Czech Republic | 2013 | 45 | Fundus | JPEG | DR detection |
| RITE [40] | 40 | Canon CR5 non-mydriatic 3CCD camera with a 45° FOV | AUR | Netherlands | 2013 | Same as DRIVE | Fundus | TIFF | Retinal vessel segmentation and ophthalmic diseases |
| DR1 [41] | 1077 | TRC-50× mydriatic camera (Topcon) | OA | Brazil | 2014 | NR | Fundus | TIFF | DR detection |
| DR2 [41] | 520 | TRC-NW8 retinograph (Topcon) with a D90 camera (Nikon, Japan) | OA | Brazil | 2014 | NR | Fundus | TIFF | DR detection |
| DRIMDB [42] | 216 | CF-60UVi fundus camera (Canon) | OA | Turkey | 2014 | NR | Fundus | JPEG | DR detection and grading |
| FFA Photographs [43] | 70 | NR | OA | Iran | 2014 | 70 | FFA | JPEG | DR grading and lesion detection |
| MESSIDOR 1 [44] | 1200 | Topcon TRC NW6 non-mydriatic retinograph, 45° FOV | OA | France | 2014 | NR | Fundus | TIFF | DR and DME grading |
| Lotus Eye Care Hospital [45] | 122 | Canon non-mydriatic Zeiss fundus camera, 90° FOV | NOA | India | 2014 | NR | Fundus | JPEG | DR detection |
| Srinivasan [46] | 3231 | SD-OCT (Heidelberg Engineering, Germany) | OA | USA | 2014 | 45 | OCT | TIFF | DR detection and grading, DME, AMD |
| EyePACS [47] | 88,702 | Centervue DRS (Centervue, Italy), Optovue iCam (Optovue, USA), Canon CR1/DGi/CR2 (Canon), and Topcon NW (Topcon) | OA | USA | 2015 | NR | Fundus | JPEG | DR grading |
| Rabbani [48] | 24 images & 24 videos | Heidelberg SPECTRALIS OCT HRA system | OA | USA | 2015 | 24 | OCT | TIFF | Diabetic eye diseases |
| DR HAGIS [49] | 39 | TRC-NW6s (Topcon), TRC-NW8 (Topcon), or CR-DGi fundus camera (Canon) | OA | UK | 2016 | 38 | Fundus | JPEG | DR, HT, AMD and glaucoma |
| JICHI DR [50] | 9939 | AFC-230 fundus camera (Nidek) | OA | Japan | 2017 | 2740 | Fundus | JPEG | DR grading |
| Rotterdam Ophthalmic Data Repository DR [51] | 1120 | TRC-NW65 non-mydriatic DFC (Topcon) | OA | Netherlands | 2017 | 70 | Fundus | PNG | DR detection |
| Singapore National DR Screening Program [52] | 494,661 | NR | NOA | Singapore | 2017 | 14,880 | Fundus | JPEG | DR, glaucoma and AMD |
| IDRID [53] | 516 | NR | OA | India | 2018 | NR | Fundus | JPEG | DR grading and lesion segmentation |
| OCTID [54] | 500+ | Cirrus HD-OCT machine (Carl Zeiss Meditec) | OA | Multi-ethnic | 2018 | NR | OCT | JPEG | DR, HT, AMD |
| UoA-DR [55] | 200 | Zeiss VISUCAM 500 fundus camera, 45° FOV | AUR | India | 2018 | NR | Fundus | JPEG | DR grading |
| APTOS [56] | 5590 | DFC | OA | India | 2019 | NR | Fundus | PNG | DR grading |
| CSME [57] | 1445 | NIDEK non-mydriatic AFC-330 auto-fundus camera | NOA | Pakistan | 2019 | NR | Fundus | JPEG | DR grading |
| OCTAGON [58] | 213 | DRI OCT Triton (Topcon) | AUR | Spain | 2019 | 213 | OCTA | JPEG & TIFF | DR detection |
| ODIR-2019 [59] | 8000 | Fundus cameras (Canon, Zeiss, and Kowa) | OA | China | 2019 | 5000 | Fundus | JPEG | DR, HT, AMD and glaucoma |
| OIA-DDR [60] | 13,673 | NR | OA | China | 2019 | 9598 | NR | JPEG | DR grading and lesion segmentation |
| Zhongshan Hospital and First People's Hospital [61] | 19,233 | Multiple colour fundus cameras | NOA | China | 2019 | 5278 | Fundus | JPEG | DR grading and lesion segmentation |
| AGAR300 [62] | 300 | 45° FOV | OA | India | 2020 | 150 | Fundus | JPEG | DR grading and MA detection |
| Bahawal Victoria Hospital [57] | 2500 | Vision Star, 24.1-megapixel Nikon D5200 camera | NOA | Pakistan | 2020 | 500 | Fundus | JPEG | DR grading |
| Retinal Lesions [63] | 1593 | Selected from the EyePACS dataset | AUR | China | 2020 | NR | Fundus | JPEG | DR grading and lesion segmentation |
| Dataset of fundus images for the study of DR [64] | 757 | Visucam 500 camera (Zeiss) | OA | Paraguay | 2021 | NR | Fundus | JPEG | DR grading |
| FGADR [60] | 2842 | NR | OA | UAE | 2021 | NR | Fundus | JPEG | DR and DME grading |
| Optos Dataset (Tsukazaki Hospital) [65] | 13,047 | 200Tx ultra-wide-field device (Optos, UK) | NOA | Japan | NR | 5389 | Fundus | JPEG | DR, glaucoma, AMD, and other eye diseases |
| MESSIDOR 2 [66] | 1748 | Topcon TRC NW6 non-mydriatic retinograph, 45° FOV | AUR | France | NR | 874 | Fundus | TIFF | DR and DME grading |
| Noor Hospital [67] | 4142 | Heidelberg SPECTRALIS SD-OCT imaging system | NOA | Iran | NR | 148 | OCT | TIFF | DR detection |
DFC: Digital fundus camera; RFC: Retinal fundus camera; FFA: Fundus fluorescein angiogram; DR: Diabetic retinopathy; MA: Microaneurysms; DME: Diabetic macular edema; FOV: Field of view; AMD: Age-related macular degeneration; OA: Open access; AUR: Access upon request; NOA: Not open access; CF: Colour fundus; HT: Hypertension; NR: Not represented.
Table 2. Classification-based studies in DR detection using fundus imaging.
| Author, Year | Dataset | Grading Details | Pre-Processing | Method | Accuracy | Sensitivity | Specificity | AUC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Abràmoff, 2016 [66] | MESSIDOR 2 | Detect RDR and vtDR | No | DCNN: IDx-DR X2.1; ML: RF | NA | 96.80% | 87.00% | 0.98 |
| Chandrakumar, 2016 [90] | EyePACS, DRIVE, STARE | Grade DR based on ICDR scale | Yes | DCNN | STARE and DRIVE: 94% | NA | NA | NA |
| Colas, 2016 [91] | EyePACS | Grade DR based on ICDR scale | No | DCNN | NA | 96.20% | 66.60% | 0.94 |
| Gulshan, 2016 [69] | EyePACS, MESSIDOR 2 | Detect DR based on ICDR scale, RDR and referable DME | Yes | DCNN | NA | EyePACS: 97.5% | EyePACS: 93.4% | EyePACS: 0.99 |
| Wong, 2016 [92] | EyePACS, MESSIDOR 2 | Detect RDR, referable DME (RDME) | No | DCNN | NA | 90% | 98% | 0.99 |
| Gargeya, 2017 [68] | EyePACS, MESSIDOR 2, eOphtha | Detect DR or non-DR | Yes | DCNN | NA | EyePACS: 94% | EyePACS: 98% | EyePACS: 0.97 |
| Somasundaram, 2017 [76] | DIARETDB1 | Detect PDR, NPDR | No | ML: t-SNE and ML-BEC | NA | NA | NA | NA |
| Takahashi, 2017 [50] | Jichi Medical University | Grade DR with the Davis grading scale (NPDR, severe DR, PDR) | No | DCNN: modified GoogLeNet | 81% | NA | NA | NA |
| Ting, 2017 [52] | SiDRP | Detect RDR, vtDR, glaucoma, AMD | No | DCNN | NA | RDR: 90.5%; vtDR: 100% | RDR: 91.6%; vtDR: 91.1% | RDR: 0.93; vtDR: 0.95 |
| Quellec, 2017 [81] | EyePACS, eOphta, DIARETDB1 | Grade DR based on ICDR | Yes | DCNN: L2-regularized o-O DCNN | NA | 94.60% | 77% | 0.955 |
| Wang, 2017 [93] | EyePACS, MESSIDOR 1 | Grade DR based on ICDR scale | Yes | Weakly supervised network classifying the image and extracting high-resolution patches containing lesions | MESSIDOR 1 RDR: 91.1% | NA | NA | MESSIDOR 1 RDR: 0.957 |
| Benson, 2018 [94] | Vision Quest Biomedical database | Grade DR based on ICDR scale + scar detection | Yes | DCNN: Inception v3 | NA | 90% | 90% | 0.95 |
| Chakrabarty, 2018 [95] | High-Resolution Fundus (HRF) images | Detect DR | Yes | DCNN | 91.67% | 100% | 100% | F1 score: 1 |
| Costa, 2018 [96] | MESSIDOR 1 | Grade DR based on ICDR scale | No | Multiple-instance learning (MIL) | NA | NA | NA | 0.9 |
| Dai, 2018 [97] | DIARETDB1 | MA, HE, CWS, Ex detection | Yes | DCNN: multi-sieving CNN (image-to-text mapping) | 96.10% | 87.80% | 99.70% | F1 score: 0.93 |
| Dutta, 2018 [98] | EyePACS | Mild NPDR, moderate NPDR, severe NPDR, PDR | Yes | DCNN: VGGNet | 86.30% | NA | NA | NA |
| Kwasigroch, 2018 [99] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: VGG-D | 81.70% | 89.50% | 50.50% | NA |
| Levenkova, 2018 [78] | UWF (ultra-wide field) | Detect CWS, MA, HE, Ex | No | DCNN, SVM | NA | NA | NA | 0.80 |
| Mansour, 2018 [72] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN, ML: AlexNet, LDA, PCA, SVM, SIFT | 97.93% | 100% | 0.93 | NA |
| Rajalakshmi, 2018 [7] | Smartphone-based imaging device | Detect DR and vtDR; grade DR based on ICDR scale | No | DCNN | NA | DR: 95.8%; vtDR: 99.1% | DR: 80.2%; vtDR: 80.4% | NA |
| Robiul Islam, 2018 [100] | APTOS 2019 | Grade DR based on ICDR scale | Yes | DCNN: VGG16 | 91.32% | NA | NA | NA |
| Zhang, 2018 [101] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: ResNet-50 | NA | 61% | 84% | 0.83 |
| Zhang, 2018 [102] | EyePACS | Grade DR based on ICDR scale | No | DCNN | 82.10% | 76.10% | 0.855 | Kappa score: 0.66 |
| Arcadu, 2019 [103] | 7-FOV images of the RIDE and RISE datasets | Two-step grading based on ETDRS | No | DCNN: Inception v3 | NA | 66% | 77% | 0.68 |
| Bellemo, 2019 [104] | Kitwe Central Hospital, Zambia | Grade DR based on ICDR scale | No | DCNN: ensemble of adapted VGGNet & ResNet | NA | RDR: 92.25%; vtDR: 99.42% | RDR: 89.04% | RDR: 0.973; vtDR: 0.934 |
| Chowdhury, 2019 [105] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: Inception v3 | 2-class: 61.3% | NA | NA | NA |
| Govindaraj, 2019 [106] | MESSIDOR 1 | Detect DR | Yes | Probabilistic neural network | 98% | ~90% (from chart) | ~97% (from chart) | F1 score: ~0.97 |
| Gulshan, 2019 [107] | Aravind Eye Hospital and Sankara Nethralaya, India | Grade DR based on ICDR scale | No | DCNN | NA | Aravind: 88.9%; SN: 92.1% | Aravind: 92.2%; SN: 95.2% | Quadratic weighted kappa: Aravind: 0.85; SN: 0.91 |
| Hathwar, 2019 [108] | EyePACS, IDRID | Detect DR | Yes | DCNN: Xception-TL | NA | 94.30% | 95.50% | Kappa score: 0.88 |
| He, 2019 [109] | IDRID | Detect DR grade and DME risk | Yes | DCNN: AlexNet | DR grade: 65% | NA | NA | NA |
| Hua, 2019 [83] | Kyung Hee University Medical Center | Grade DR based on ICDR scale | No | DCNN: Tri-SDN | 90.60% | 96.50% | 82.10% | 0.88 |
| Jiang, 2019 [110] | Beijing Tongren Eye Center | DR or non-DR | Yes | DCNN: Inception v3, ResNet152 and Inception-ResNet-v2 | Integrated model: 88.21% | Integrated model: 85.57% | Integrated model: 90.85% | 0.946 |
| Li, 2019 [111] | IDRID, MESSIDOR 1 | Grade DR based on ICDR scale | No | DCNN: attention network based on ResNet50 | DR: 92.6%; DME: 91.2% | DR: 92.0%; DME: 70.8% | NA | DR: 0.96; DME: 0.92 |
| Li, 2019 [61] | Shanghai Zhongshan Hospital (SZH) and Shanghai First People's Hospital (SFPH), China; MESSIDOR 2 | Grade DR based on ICDR scale | Yes | DCNN: Inception v3 | 93.49% | 96.93% | 93.45% | 0.9905 |
| Metan, 2019 [112] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: ResNet | 81% | NA | NA | NA |
| Nagasawa, 2019 [113] | Saneikai Tsukazaki Hospital and Tokushima University Hospital, Japan | Detect PDR | Yes | DCNN: VGG-16 | NA | PDR: 94.7% | PDR: 97.2% | PDR: 0.96 |
| Qummar, 2019 [114] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: ensemble of ResNet50, Inception v3, Xception, Dense121, Dense169 | 80.80% | 51.50% | 86.72% | F1 score: 0.53 |
| Ruamviboonsuk, 2019 [115] | Thailand national DR screening program dataset | Grade DR based on ICDR and detect RDME | No | DCNN | NA | DR: 96.8% | DR: 95.6% | DR: 0.98 |
| Sahlsten, 2019 [70] | Private dataset | Detect DR based on multiple grading systems, RDR and DME | Yes | DCNN: Inception-v3 | NA | 89.60% | 97.40% | 0.98 |
| Sayres, 2019 [82] | EyePACS | Grade DR based on ICDR | No | DCNN | 88.40% | 91.50% | 94.80% | NA |
| Sengupta, 2019 [116] | EyePACS, MESSIDOR 1 | Grade DR based on ICDR scale | Yes | DCNN: Inception-v3 | 90.4% | 90% | 91.94% | NA |
| Ting, 2019 [117] | SiDRP, SiMES, SINDI, SCES, BES, AFEDS, CUHK, DMP Melb, with 2 FOV | Grade DR based on ICDR scale | Yes | DCNN | NA | NA | NA | Detect DR: 0.86; RDR: 0.96 |
| Zeng, 2019 [118] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: Inception v3 | NA | 82.2% | 70.7% | 0.95 |
| Ali, 2020 [57] | Bahawal Victoria Hospital, Pakistan | Grade DR based on ICDR scale | Yes | ML: SMO, Lg, MLP, LMT, SLg on selected post-optimized hybrid feature datasets | MLP: 73.73%; LMT: 73.00%; SLg: 73.07%; SMO: 68.60%; Lg: 72.07% | NA | NA | MLP: 0.916; LMT: 0.919; SLg: 0.921; SMO: 0.878; Lg: 0.923 |
| Araujo, 2020 [119] | EyePACS, MESSIDOR 2, IDRID, DMR, SCREEN-DR, DR1, DRIMDB, HRF | Grade DR based on ICDR scale | Yes | DCNN | NA | NA | NA | Kappa score: EyePACS: 0.74 |
| Chetoui, 2020 [26] | EyePACS, MESSIDOR 1, 2, eOphta, UoA-DR (University of Auckland), IDRID, STARE, DIARETDB0, 1 | Grade DR based on ICDR scale | Yes | DCNN: Inception-ResNet v2 | 97.90% | 95.80% | 97.10% | 98.60% |
| Elswah, 2020 [74] | IDRID | Grade DR based on ICDR scale | Yes | DCNN: ResNet 50 + NN or SVM | NN: 88%; SVM: 65% | NA | NA | NA |
| Gadekallu, 2020 [71] | DR Debrecen dataset (20 features of MESSIDOR 1) | DR or non-DR | Yes | DCNN; ML: PCA + Firefly | 97% | 92% | 95% | NA |
| Gadekallu, 2020 [120] | DR Debrecen dataset | Detect DR | Yes | ML: PCA + grey wolf optimization (GWO) + DNN | 97.30% | 91% | 97% | NA |
| Gayathri, 2020 [121] | MESSIDOR 1, EyePACS, DIARETDB0 | Grade DR based on ICDR scale | NA | Wavelet transform, SVM, RF | MESSIDOR 1: 99.75% | MESSIDOR 1: 99.8% | MESSIDOR 1: 99.9% | NA |
| Jiang, 2020 [122] | MESSIDOR 1 | Image-wise labelling of the presence of MA, HE, Ex, CWS | Yes | DCNN: ResNet 50 based | MA: 89.4%; HE: 98.9%; Ex: 92.8%; CWS: 88.6%; normal: 94.2% | MA: 85.5%; HE: 100%; Ex: 93.3%; CWS: 94.6%; normal: 93.9% | MA: 90.7%; HE: 98.6%; Ex: 92.7%; CWS: 86.8%; normal: 94.4% | MA: 0.94; HE: 1; Ex: 0.97; CWS: 0.97; normal: 0.98 |
| Lands, 2020 [123] | APTOS 2019, APTOS 2015 | Grade DR based on ICDR scale | Yes | DCNN: DenseNet 169 | 93% | NA | NA | Kappa score: 0.8 |
| Ludwig, 2020 [10] | EyePACS, APTOS, MESSIDOR 2, EYEGO | Detect RDR | Yes | DCNN: DenseNet201 | NA | MESSIDOR 2: 87% | MESSIDOR 2: 80% | MESSIDOR 2: 0.92 |
| Majumder, 2020 [15] | EyePACS, APTOS 2019 | Grade DR based on ICDR scale | Yes | CNN | 88.50% | NA | NA | NA |
| Memari, 2020 [124] | MESSIDOR 1, HEI-MED | Detect DR | Yes | DCNN | NA | NA | NA | NA |
| Narayanan, 2020 [125] | APTOS 2019 | Detect and grade DR based on ICDR scale | Yes | DCNN: AlexNet, ResNet, VGG16, Inception v3 | 98.4% | NA | NA | 0.985 |
| Pao, 2020 [84] | EyePACS | Grade DR based on ICDR scale | Yes | CNN: bichannel customized CNN | 87.83% | 77.81% | 93.88% | 0.93 |
| Paradisa, 2020 [73] | DIARETDB1 | Grade DR based on ICDR scale | Yes | ResNet-50 for feature extraction; SVM, RF, KNN, and XGBoost as classifiers | SVM: 99%; KNN: 100% | SVM: 99%; KNN: 100% | NA | NA |
| Patel, 2020 [126] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: MobileNet v2 | 91.29% | NA | NA | NA |
| Riaz, 2020 [80] | EyePACS, MESSIDOR 2 | NA | Yes | DCNN | NA | EyePACS: 94.0% | EyePACS: 97.0% | EyePACS: 0.98 |
| Samanta, 2020 [127] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: DenseNet121 based | 84.1% | NA | NA | NA |
| Serener, 2020 [128] | EyePACS, MESSIDOR 1, eOphta, HRF, IDRID | Grade DR based on ICDR scale | Yes | DCNN: ResNet 18 | Country (EyePACS): 65%; continent (EyePACS + HRF): 80% | Country: 17%; continent: 80% | Country: 89%; continent: 80% | NA |
| Shaban, 2020 [129] | APTOS | Grade DR as non-DR, moderate DR, and severe DR | Yes | DCNN | 88% | 87% | 94% | 0.95 |
| Shankar, 2020 [85] | MESSIDOR 1 | Grade DR based on ICDR scale | Yes | DCNN: histogram-based segmentation + SDL | 99.28% | 98.54% | 99.38% | NA |
| Singh, 2020 [130] | IDRID, MESSIDOR 1 | Grade DME in 3 levels | Yes | DCNN: hierarchical ensemble of CNNs (HE-CNN) | 96.12% | 96.32% | 95.84% | F1 score: 0.96 |
| Thota, 2020 [131] | EyePACS | NA | Yes | DCNN: VGG16 | 74% | 80.0% | 65.0% | 0.80 |
| Wang, 2020 [132] | 2 eye hospitals, DIARETDB1, EyePACS, IDRID | MA, HE, EX | Yes | DCNN | MA: 99.7%; HE: 98.4%; EX: 98.1%; grading: 91.79% | Grading: 80.58% | Grading: 95.77% | Grading: 0.93 |
| Wang, 2020 [133] | Shenzhen, Guangdong, China | Grade DR severity based on ICDR scale and detect MA, IHE, SRH, HE, CWS, VAN, IRMA, NVE, NVD, PFP, VPH, TRD | No | DCNN: multi-task network using channel-based attention blocks | NA | NA | NA | Kappa score: grading: 0.80; DR features: 0.64 |
| Zhang, 2020 [134] | 3 hospitals in China | Classify into retinal tear & retinal detachment, DR and pathological myopia | Yes | DCNN: InceptionResNetv2 | 93.73% | 91.22% | 96.19% | F1 score: 0.93 |
| Abdelmaksoud, 2021 [135] | EyePACS, MESSIDOR 1, eOphta, CHASEDB1, HRF, IDRID, STARE, DIARETDB0, 1 | | Yes | U-Net + SVM | 95.10% | 86.10% | 86.80% | 0.91 |
| Bora, 2021 [115] | EyePACS | Grade DR based on ICDR scale | No | DCNN: Inception v3 | NA | NA | NA | Three FOV: 0.79; one FOV: 0.70 |
| Gangwar, 2021 [136] | APTOS 2019, MESSIDOR 1 | Grade DR based on ICDR scale | Yes | DCNN: Inception-ResNet v2 | APTOS: 82.18%; MESSIDOR 1: 72.33% | NA | NA | NA |
| He, 2021 [137] | DDR, MESSIDOR 1, EyePACS | Grade DR based on ICDR scale | No | DCNN: MobileNet 1 with attention blocks | MESSIDOR 1: 92.1% | MESSIDOR 1: 89.2% | MESSIDOR 1: 91% | F1 score: MESSIDOR 1: 0.89 |
| Hsieh, 2021 [32] | National Taiwan University Hospital (NTUH), Taiwan; EyePACS | Detect any DR, RDR and PDR | Yes | DCNN: Inception v4 for any DR and RDR; ResNet for PDR | Detect DR: 90.7%; RDR: 90.0%; PDR: 99.1% | Detect DR: 92.2%; RDR: 99.2%; PDR: 90.9% | Detect DR: 89.5%; RDR: 90.1%; PDR: 99.3% | 0.955 |
| Khan, 2021 [138] | EyePACS | Grade DR based on ICDR scale | Yes | DCNN: customized highly nonlinear scale-invariant network | 85% | 55.6% | 91.0% | F1 score: 0.59 |
| Oh, 2021 [2] | 7-FOV fundus images of Catholic Kwandong University, South Korea | Detect DR | Yes | DCNN: ResNet 34 | 83.38% | 83.38% | 83.41% | 0.915 |
| Saeed, 2021 [139] | MESSIDOR, EyePACS | Grade DR based on ICDR scale | No | DCNN: ResNet GB | EyePACS: 99.73% | EyePACS: 96.04% | EyePACS: 99.81% | EyePACS: 0.98 |
| Wang, 2021 [140] | EyePACS, images from Peking Union Medical College Hospital, China | Detect RDR with lesion-based segmentation of PHE, VHE, NV, CWS, FIP, IHE, IRMA and MA, then staging based on ICDR scale | No | DCNN: Inception v3 | NA | EyePACS: 90.60% | EyePACS: 80.70% | EyePACS: 0.943 |
| Wang, 2021 [141] | MESSIDOR 1 | Grade DR based on ICDR scale | Yes | DCNN: multichannel GAN with semi-supervision | RDR: 93.2%; DR grading: 84.23% | RDR: 92.6% | RDR: 91.5% | RDR: 0.96 |
Characteristics and evaluation of DR grading methods. This table lists methods with no preprocessing or only common preprocessing. In addition to the abbreviations described earlier, this table introduces: Singapore Integrated Diabetic Retinopathy Screening Program between 2014 and 2015 (SiDRP 14–15), Singapore Malay Eye Study (SIMES), Singapore Indian Eye Study (SINDI), Singapore Chinese Eye Study (SCES), Beijing Eye Study (BES), African American Eye Study (AFEDS), Chinese University of Hong Kong (CUHK), Diabetes Management Project Melbourne (DMP Melb), and Generative Adversarial Network (GAN).
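Several of the grading studies above (e.g., Wang [133]) report Cohen's kappa, typically with quadratic weights, which gives partial credit for near-miss predictions on the ordinal ICDR scale. A minimal pure-Python sketch of the metric (the function name and the 5-class default are illustrative, not taken from any cited paper):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic-weighted Cohen's kappa for ordinal labels in 0..n_classes-1."""
    n = len(y_true)
    # observed confusion matrix
    obs = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    # marginal histograms of true and predicted grades
    hist_t = [sum(row) for row in obs]
    hist_p = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic disagreement weight
            num += w * obs[i][j]
            den += w * hist_t[i] * hist_p[j] / n
    return 1.0 - num / den

# perfect agreement on the five ICDR grades gives kappa = 1
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))  # 1.0
```

In practice `sklearn.metrics.cohen_kappa_score(y_true, y_pred, weights="quadratic")` computes the same quantity.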
Table 3. Classification-based studies in DR detection using a special preprocessing on fundus images for DR grading in ICDR scale.
Author, Year | Dataset | Pre-Processing Technique | Method | Accuracy
Datta, 2016 [142] | DRIVE, STARE, DIARETDB0, DIARETDB1 | Yes, contrast optimization | Image processing | NA
Lin, 2018 [143] | EyePACS | Yes, conversion to entropy images | DCNN | Original images: 81.8%; entropy images: 86.1%
Mukhopadhyay, 2018 [144] | Prasad Eye Institute, India | Yes, local binary patterns | ML: Decision tree, KNN | KNN: 69.8%
Pour, 2020 [145] | MESSIDOR 1, 2, IDRID | Yes, CLAHE | DCNN: EfficientNet-B5 | NA
Ramchandre, 2020 [146] | APTOS 2019 | Yes, image augmentation with AugMix | DCNN: EfficientNetb3, SEResNeXt32x4d | EfficientNetb3: 91.4%; SEResNeXt32x4d: 85.2%
Shankar, 2020 [85] | MESSIDOR 1 | Yes, CLAHE | DCNN: Hyperparameter Tuning Inception-v4 (HPTI-v4) | 99.5%
Bhardwaj, 2021 [147] | DRIVE, STARE, MESSIDOR 1, DIARETDB1, IDRID, ROC | Yes, image contrast enhancement and OD localization | DCNN: InceptionResNet v2 | 93.3%
Bilal, 2021 [16] | IDRID | Yes, adaptive histogram equalization and contrast stretching | ML: SVM + KNN + Binary Tree | 98.1%
Elloumi, 2021 [148] | DIARETDB1 | Yes, optic disc location, fundus image partitioning | ML: SVM, RF, KNN | 98.4%
AHE: Adaptive Histogram Equalization, CLAHE: Contrast Limited Adaptive Histogram Equalization.
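Several Table 3 entries (Pour [145], Shankar [85]) preprocess fundus images with CLAHE. Full CLAHE equalizes histograms over image tiles with a clip limit and blends the tile mappings bilinearly; the numpy sketch below shows only the clipped-histogram equalization core on a single 8-bit channel, with tiling and interpolation omitted (function and parameter names are illustrative):

```python
import numpy as np

def clipped_equalize(channel, clip_limit=None):
    """Histogram-equalize an 8-bit channel; optionally clip bins first (CLAHE's core step)."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    if clip_limit is not None:
        # cap each bin and redistribute the clipped excess uniformly,
        # which limits contrast amplification in near-uniform regions
        excess = np.maximum(hist - clip_limit, 0.0).sum()
        hist = np.minimum(hist, clip_limit) + excess / hist.size
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255.0).astype(np.uint8)
    return lut[channel]
```

In practice the OpenCV implementation, `cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))`, is the usual choice, commonly applied to the green channel or the L channel of a LAB-converted fundus image.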
Table 4. Classification-based studies in DR detection using OCT and OCTA.
Author, Year | Dataset | Grading Details | Preprocessing | Method | Accuracy | Sensitivity | Specificity | AUC
Eladawi, 2018 [149] | OCTA images, University of Louisville, USA | Detect DR | Yes | ML: Vessel segmentation, local feature extraction, SVM | 97.3% | 97.9% | 96.4% | 0.97
Islam, 2019 [150] | Kermani OCT dataset | NA | Yes | DCNN: DenseNet 201 | 98.6% | 0.986 | 0.995 | NA
Le, 2020 [151] | Private OCTA dataset | Grade DR | No | DCNN: VGG16 | 87.3% | 83.8% | 90.8% | 0.97
Sandhu, 2020 [75] | OCT, OCTA, clinical and demographic data, University of Louisville Clinical Center, USA | Detect mild and moderate DR | Yes | ML: RF | 96.0% | 100.0% | 94.0% | 0.96
Liu, 2021 [77] | Private OCTA dataset | Detect DR | Yes | Logistic Regression (LR), LR regularized with the elastic net (LR-EN), SVM and XGBoost | LR-EN: 80.0% | LR-EN: 82.0% | LR-EN: 84.0% | LR-EN: 0.83
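The accuracy, sensitivity, specificity, and AUC columns throughout these tables follow the standard binary-screening definitions. A small pure-Python sketch (function names are illustrative; the AUC uses the Mann-Whitney rank formulation rather than trapezoidal ROC integration):

```python
def sensitivity_specificity(y_true, y_pred):
    """Binary labels: 1 = DR present, 0 = DR absent."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """Probability that a random positive is scored above a random negative (ties count half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The same values are obtained from `sklearn.metrics.confusion_matrix` and `sklearn.metrics.roc_auc_score` when those are available.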
Table 5. Segmentation-based studies in DR detection using fundus images.
Author, Year | Dataset | Considered Lesions | Preprocessing | Segmentation Method | Sensitivity/Specificity | AUC
Imani, 2016 [157] | DIARETDB1, HEI-MED, eOphta | Ex | Yes | Dynamic decision thresholding, morphological feature extraction, smooth edge removal | 89.01%/99.93% | 0.961
Shah, 2016 [154] | ROC | MA | Yes | Curvelet transform and rule-based classifier | 48.2%/NA | NA
Quellec, 2017 [81] | EyePACS, eOphta, DIARETDB1 | CWS, Ex, HE, MA | Yes | DCNN: o-O solution | DIARETDB1: CWS: 62.4%/NA; Ex: 55.2%/NA; HE: 44.9%/NA; MA: 31.6%/NA | EyePACS: 0.955
Huang, 2018 [155] | MESSIDOR 1, DIARETDB0, 1 | NV | Yes | ELM | NA/NA | ACC: 89.2%
Kaur, 2018 [156] | STARE, eOphta, MESSIDOR 1, DIARETDB1, private dataset | Ex, CWS | Yes | Dynamic decision thresholding | 94.8%/99.80% | ACC: 98.43%
Lam, 2018 [160] | EyePACS, eOphta | Ex, MA, HE, NV | NA | DCNN: AlexNet, VGG16, GoogLeNet, ResNet, and Inception-v3 | NA/NA | EyePACS: 0.99; ACC: 98.0%
Benzamin, 2018 [161] | IDRID | Ex | Yes | DCNN | 98.29%/41.35% | ACC: 96.6%
Orlando, 2018 [162] | eOphtha, DIARETDB1, MESSIDOR 1 | MA, HE | Yes | ML: RF | NA/NA | 0.93
Eftekhari, 2019 [163] | ROC, eOphta | MA | Yes | DCNN: Two-level CNN, thresholded probability map | NA/NA | ROC: 0.660
Wu, 2019 [164] | HRF | Blood vessels, optic disc and other regions | Yes | DCNN: AlexNet, GoogleNet, ResNet50, VGG19 | NA/NA | AlexNet: 0.94; ACC: 95.45%
Yan, 2019 [165] | IDRID | Ex, MA, HE, CWS | Yes | DCNN: Global and local Unet | NA/NA | Ex: 0.889; MA: 0.525; HE: 0.703; CWS: 0.679
Qiao, 2020 [166] | IDRID | MA | Yes | DCNN | 98.4%/97.10% | ACC: 97.8%
Wang, 2021 [141] | EyePACS, images from Peking Union Medical College Hospital | Detect RDR with lesion-based segmentation of PHE, VHE, NV, CWS, FIP, IHE, Ex, MA | No | DCNN: Inception v3 and FCN 32s | PHE: 60.7%/90.9%; Ex: 49.5%/87.4%; VHE: 28.3%/84.6%; NV: 36.3%/83.7%; CWS: 57.3%/80.1%; FIP: 8.7%/78.0%; IHE: 79.8%/57.7%; MA: 16.4%/49.8% | NA
Wei, 2021 [63] | EyePACS | MA, IHE, VHE, PHE, Ex, CWS, FIP, NV | Yes | DCNN: Transfer learning from Inception v3 | NA/NA | NA
Xu, 2021 [167] | IDRID | Ex, MA, HE, CWS | Yes | DCNN: Enhanced Unet named FFUnet | Ex: 87.55%/NA; MA: 59.33%/NA; HE: 73.42%/NA; CWS: 79.33%/NA | IOU: Ex: 0.84; MA: 0.56; HE: 0.73; CWS: 0.75
ICDR scale: International Clinical Diabetic Retinopathy scale, RDR: Referable DR, vtDR: vision threatening DR, PDR: Proliferative DR, NPDR: Non-Proliferative DR, MA: Microaneurysm, Ex: hard Exudate, CWS: Cotton Wool Spot, HE: Hemorrhage, FIP: Fibrous Proliferation, VHE: Vitreous Hemorrhage, PHE: Preretinal Hemorrhage, NV: Neovascularization.
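Xu et al. [167] report per-lesion IoU (intersection over union) for their segmentation masks. A minimal numpy sketch of that metric on binary masks (the empty-union convention used here is common but not universal):

```python
import numpy as np

def iou(pred_mask, true_mask):
    """IoU between two same-shape binary lesion masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    # if neither mask marks any pixel, treat the prediction as perfect
    return 1.0 if union == 0 else inter / union
```

For lesion classes with tiny structures such as MA, IoU penalizes boundary errors heavily, which is consistent with the low MA scores in the table.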
Table 6. Segmentation-based studies in DR detection using OCT, OCTA images.
Author, Year | Dataset | Considered Lesions | Pre-Processing | Segmentation Method | Sensitivity | Specificity | AUC
Guo, 2018 [159] | UW-OCTA private dataset | Avascular area | Yes | DCNN | Control: 100.0%; diabetes without DR: 99.0%; mild to moderate DR: 99.0%; severe DR: 100.0% | Control: 84.0%; diabetes without DR: 77.0%; mild to moderate DR: 85.0%; severe DR: 68.0% | ACC: Control: 89.0%; diabetes without DR: 79.0%; mild to moderate DR: 87.0%; severe DR: 76.0%
ElTanboly, 2018 [168] | OCT and OCTA images of University of Louisville | 12 different retinal layers & segmented OCTA plexuses | No | SVM | NA | NA | ACC: 97.0%
ElTanboly, 2018 [168] | SD-OCT images of Kentucky Lions Eye Center | 12 distinct retinal layers | Yes | Statistical analysis and extraction of features such as tortuosity, reflectivity, and thickness for 12 retinal layers | NA | NA | ACC: 73.2%
Sandhu, 2018 [169] | OCT images of University of Louisville, USA | 12 layers; quantifies the reflectivity, curvature, and thickness | Yes | DCNN: 2-stage deep CNN | 92.5% | 95.0% | ACC: 93.8%
Holmberg, 2020 [158] | OCT from Helmholtz Zentrum München, fundus from EyePACS | Segment retinal thickness map, grade DR based on ICDR scale | No | DCNN: on OCT, retinal layer segmentation with Unet; on fundus, self-supervised learning with ResNet50 | NA | NA | IOU on OCT: 0.94
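Holmberg et al. [158] derive retinal thickness maps from Unet layer segmentations of OCT B-scans. A toy numpy sketch of the thickness-map step, assuming a binary per-layer mask and an illustrative axial resolution value (not taken from the paper):

```python
import numpy as np

def thickness_map(layer_mask, axial_res_um=3.9):
    """Per-A-scan layer thickness from a binary segmentation mask.

    layer_mask: (depth, width) array, 1 where the segmented layer is present.
    Thickness of each column = pixel count along depth * axial resolution (um).
    """
    return np.asarray(layer_mask).sum(axis=0) * axial_res_um
```

Stacking these per-B-scan profiles across a volume yields the 2-D thickness map that the fundus-side network is then trained to predict.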
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

MDPI and ACS Style

Lakshminarayanan, V.; Kheradfallah, H.; Sarkar, A.; Jothi Balaji, J. Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey. J. Imaging 2021, 7, 165. https://doi.org/10.3390/jimaging7090165

